2025-09-29 05:25:03.178960 | Job console starting
2025-09-29 05:25:03.195793 | Updating git repos
2025-09-29 05:25:03.253498 | Cloning repos into workspace
2025-09-29 05:25:03.473154 | Restoring repo states
2025-09-29 05:25:03.502254 | Merging changes
2025-09-29 05:25:03.502275 | Checking out repos
2025-09-29 05:25:03.784077 | Preparing playbooks
2025-09-29 05:25:04.448038 | Running Ansible setup
2025-09-29 05:25:08.396179 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-29 05:25:09.149047 |
2025-09-29 05:25:09.149213 | PLAY [Base pre]
2025-09-29 05:25:09.175029 |
2025-09-29 05:25:09.175225 | TASK [Setup log path fact]
2025-09-29 05:25:09.199164 | orchestrator | ok
2025-09-29 05:25:09.217258 |
2025-09-29 05:25:09.217496 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-29 05:25:09.260027 | orchestrator | ok
2025-09-29 05:25:09.271961 |
2025-09-29 05:25:09.272085 | TASK [emit-job-header : Print job information]
2025-09-29 05:25:09.317238 | # Job Information
2025-09-29 05:25:09.317524 | Ansible Version: 2.16.14
2025-09-29 05:25:09.317578 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-29 05:25:09.317626 | Pipeline: post
2025-09-29 05:25:09.317659 | Executor: 521e9411259a
2025-09-29 05:25:09.317689 | Triggered by: https://github.com/osism/testbed/commit/167d8d6c84435a326728aa8b9269a5ade27f34bd
2025-09-29 05:25:09.317721 | Event ID: a224b000-9cf4-11f0-87ec-d1c96a199043
2025-09-29 05:25:09.326176 |
2025-09-29 05:25:09.326294 | LOOP [emit-job-header : Print node information]
2025-09-29 05:25:09.439040 | orchestrator | ok:
2025-09-29 05:25:09.439231 | orchestrator | # Node Information
2025-09-29 05:25:09.439265 | orchestrator | Inventory Hostname: orchestrator
2025-09-29 05:25:09.439290 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-29 05:25:09.439312 | orchestrator | Username: zuul-testbed03
2025-09-29 05:25:09.439333 | orchestrator | Distro: Debian 12.12
2025-09-29 05:25:09.439356 | orchestrator | Provider: static-testbed
2025-09-29 05:25:09.439396 | orchestrator | Region:
2025-09-29 05:25:09.439418 | orchestrator | Label: testbed-orchestrator
2025-09-29 05:25:09.439438 | orchestrator | Product Name: OpenStack Nova
2025-09-29 05:25:09.439459 | orchestrator | Interface IP: 81.163.193.140
2025-09-29 05:25:09.464880 |
2025-09-29 05:25:09.465042 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-29 05:25:09.936547 | orchestrator -> localhost | changed
2025-09-29 05:25:09.946252 |
2025-09-29 05:25:09.946410 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-29 05:25:10.994410 | orchestrator -> localhost | changed
2025-09-29 05:25:11.011353 |
2025-09-29 05:25:11.011517 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-29 05:25:11.300116 | orchestrator -> localhost | ok
2025-09-29 05:25:11.314332 |
2025-09-29 05:25:11.314549 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-29 05:25:11.349708 | orchestrator | ok
2025-09-29 05:25:11.369547 | orchestrator | included: /var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-29 05:25:11.377946 |
2025-09-29 05:25:11.378049 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-29 05:25:12.405475 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-29 05:25:12.405772 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/work/6d60e6d45652465bae1d8101981b86c3_id_rsa
2025-09-29 05:25:12.405828 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/work/6d60e6d45652465bae1d8101981b86c3_id_rsa.pub
2025-09-29 05:25:12.405869 | orchestrator -> localhost | The key fingerprint is:
2025-09-29 05:25:12.405906 | orchestrator -> localhost | SHA256:CcK7lmRrdkqy7IL3nHZFaSSVyUUJZFpEcrLn/uOMElE zuul-build-sshkey
2025-09-29 05:25:12.405941 | orchestrator -> localhost | The key's randomart image is:
2025-09-29 05:25:12.405991 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-29 05:25:12.406023 | orchestrator -> localhost | | o*X*o. |
2025-09-29 05:25:12.406057 | orchestrator -> localhost | | . .OE . |
2025-09-29 05:25:12.406089 | orchestrator -> localhost | | o ++.. |
2025-09-29 05:25:12.406120 | orchestrator -> localhost | | o.++. |
2025-09-29 05:25:12.406150 | orchestrator -> localhost | | + +S |
2025-09-29 05:25:12.406188 | orchestrator -> localhost | | o +... |
2025-09-29 05:25:12.406222 | orchestrator -> localhost | |. . O .o. |
2025-09-29 05:25:12.406253 | orchestrator -> localhost | |o..O.+o +. |
2025-09-29 05:25:12.406284 | orchestrator -> localhost | | ++o=. ...+. |
2025-09-29 05:25:12.406315 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-29 05:25:12.406430 | orchestrator -> localhost | ok: Runtime: 0:00:00.536480
2025-09-29 05:25:12.414403 |
2025-09-29 05:25:12.414514 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-29 05:25:12.443794 | orchestrator | ok
2025-09-29 05:25:12.454135 | orchestrator | included: /var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-29 05:25:12.463642 |
2025-09-29 05:25:12.463746 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-29 05:25:12.487190 | orchestrator | skipping: Conditional result was False
2025-09-29 05:25:12.495413 |
2025-09-29 05:25:12.495637 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-29 05:25:13.052003 | orchestrator | changed
2025-09-29 05:25:13.058348 |
2025-09-29 05:25:13.058529 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-29 05:25:13.307261 | orchestrator | ok
2025-09-29 05:25:13.315693 |
2025-09-29 05:25:13.315814 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-29 05:25:13.705332 | orchestrator | ok
2025-09-29 05:25:13.712587 |
2025-09-29 05:25:13.712701 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-29 05:25:14.084145 | orchestrator | ok
2025-09-29 05:25:14.091445 |
2025-09-29 05:25:14.091574 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-29 05:25:14.116038 | orchestrator | skipping: Conditional result was False
2025-09-29 05:25:14.125781 |
2025-09-29 05:25:14.125924 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-29 05:25:14.555070 | orchestrator -> localhost | changed
2025-09-29 05:25:14.576975 |
2025-09-29 05:25:14.577100 | TASK [add-build-sshkey : Add back temp key]
2025-09-29 05:25:14.921304 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/work/6d60e6d45652465bae1d8101981b86c3_id_rsa (zuul-build-sshkey)
2025-09-29 05:25:14.921870 | orchestrator -> localhost | ok: Runtime: 0:00:00.022067
2025-09-29 05:25:14.936315 |
2025-09-29 05:25:14.936477 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-29 05:25:15.339141 | orchestrator | ok
2025-09-29 05:25:15.346632 |
2025-09-29 05:25:15.346751 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-29 05:25:15.371607 | orchestrator | skipping: Conditional result was False
2025-09-29 05:25:15.423602 |
2025-09-29 05:25:15.423729 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-29 05:25:15.801010 | orchestrator | ok
2025-09-29 05:25:15.813567 |
2025-09-29 05:25:15.813689 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-29 05:25:15.852834 | orchestrator | ok
2025-09-29 05:25:15.860155 |
2025-09-29 05:25:15.860262 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-29 05:25:16.148867 | orchestrator -> localhost | ok
2025-09-29 05:25:16.166659 |
2025-09-29 05:25:16.166814 | TASK [validate-host : Collect information about the host]
2025-09-29 05:25:17.376254 | orchestrator | ok
2025-09-29 05:25:17.390020 |
2025-09-29 05:25:17.390139 | TASK [validate-host : Sanitize hostname]
2025-09-29 05:25:17.455935 | orchestrator | ok
2025-09-29 05:25:17.464144 |
2025-09-29 05:25:17.464360 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-29 05:25:18.024979 | orchestrator -> localhost | changed
2025-09-29 05:25:18.032002 |
2025-09-29 05:25:18.032113 | TASK [validate-host : Collect information about zuul worker]
2025-09-29 05:25:18.462790 | orchestrator | ok
2025-09-29 05:25:18.471901 |
2025-09-29 05:25:18.472044 | TASK [validate-host : Write out all zuul information for each host]
2025-09-29 05:25:19.024263 | orchestrator -> localhost | changed
2025-09-29 05:25:19.042090 |
2025-09-29 05:25:19.042231 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-29 05:25:19.327325 | orchestrator | ok
2025-09-29 05:25:19.333442 |
2025-09-29 05:25:19.333544 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-29 05:25:57.431729 | orchestrator | changed:
2025-09-29 05:25:57.432020 | orchestrator | .d..t...... src/
2025-09-29 05:25:57.432076 | orchestrator | .d..t...... src/github.com/
2025-09-29 05:25:57.432118 | orchestrator | .d..t...... src/github.com/osism/
2025-09-29 05:25:57.432155 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-29 05:25:57.432190 | orchestrator | RedHat.yml
2025-09-29 05:25:57.448721 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-29 05:25:57.448738 | orchestrator | RedHat.yml
2025-09-29 05:25:57.448790 | orchestrator | = 2.2.0"...
2025-09-29 05:26:08.555768 | orchestrator | 05:26:08.555 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-29 05:26:08.579814 | orchestrator | 05:26:08.579 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-29 05:26:08.734371 | orchestrator | 05:26:08.734 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-29 05:26:09.191762 | orchestrator | 05:26:09.191 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-29 05:26:09.263430 | orchestrator | 05:26:09.263 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-29 05:26:09.928679 | orchestrator | 05:26:09.928 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-29 05:26:09.999509 | orchestrator | 05:26:09.999 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-29 05:26:10.418824 | orchestrator | 05:26:10.418 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-29 05:26:10.418894 | orchestrator | 05:26:10.418 STDOUT terraform: Providers are signed by their developers.
2025-09-29 05:26:10.418901 | orchestrator | 05:26:10.418 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-29 05:26:10.418908 | orchestrator | 05:26:10.418 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-29 05:26:10.418955 | orchestrator | 05:26:10.418 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-29 05:26:10.419058 | orchestrator | 05:26:10.418 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-29 05:26:10.419105 | orchestrator | 05:26:10.419 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-29 05:26:10.419142 | orchestrator | 05:26:10.419 STDOUT terraform: you run "tofu init" in the future.
2025-09-29 05:26:10.419186 | orchestrator | 05:26:10.419 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-29 05:26:10.419237 | orchestrator | 05:26:10.419 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-29 05:26:10.419288 | orchestrator | 05:26:10.419 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-29 05:26:10.419296 | orchestrator | 05:26:10.419 STDOUT terraform: should now work.
2025-09-29 05:26:10.419351 | orchestrator | 05:26:10.419 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-29 05:26:10.419403 | orchestrator | 05:26:10.419 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-29 05:26:10.419451 | orchestrator | 05:26:10.419 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-29 05:26:10.703396 | orchestrator | 05:26:10.703 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-29 05:26:10.703466 | orchestrator | 05:26:10.703 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-29 05:26:10.703486 | orchestrator | 05:26:10.703 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-29 05:26:10.703492 | orchestrator | 05:26:10.703 STDOUT terraform: for this configuration.
2025-09-29 05:26:10.920213 | orchestrator | 05:26:10.919 STDOUT terraform: ci.auto.tfvars
2025-09-29 05:26:10.922819 | orchestrator | 05:26:10.922 STDOUT terraform: default_custom.tf
2025-09-29 05:26:11.825125 | orchestrator | 05:26:11.824 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-29 05:26:12.336386 | orchestrator | 05:26:12.336 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-29 05:26:12.566295 | orchestrator | 05:26:12.566 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-29 05:26:12.566362 | orchestrator | 05:26:12.566 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-29 05:26:12.566370 | orchestrator | 05:26:12.566 STDOUT terraform:   + create
2025-09-29 05:26:12.566377 | orchestrator | 05:26:12.566 STDOUT terraform:  <= read (data resources)
2025-09-29 05:26:12.566383 | orchestrator | 05:26:12.566 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-29 05:26:12.566389 | orchestrator | 05:26:12.566 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-29 05:26:12.566436 | orchestrator | 05:26:12.566 STDOUT terraform:   # (config refers to values not yet known)
2025-09-29 05:26:12.566444 | orchestrator | 05:26:12.566 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-29 05:26:12.566503 | orchestrator | 05:26:12.566 STDOUT terraform:   + checksum = (known after apply)
2025-09-29 05:26:12.566531 | orchestrator | 05:26:12.566 STDOUT terraform:   + created_at = (known after apply)
2025-09-29 05:26:12.566564 | orchestrator | 05:26:12.566 STDOUT terraform:   + file = (known after apply)
2025-09-29 05:26:12.566589 | orchestrator | 05:26:12.566 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.566636 | orchestrator | 05:26:12.566 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.566687 | orchestrator | 05:26:12.566 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-29 05:26:12.566716 | orchestrator | 05:26:12.566 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-29 05:26:12.566744 | orchestrator | 05:26:12.566 STDOUT terraform:   + most_recent = true
2025-09-29 05:26:12.566765 | orchestrator | 05:26:12.566 STDOUT terraform:   + name = (known after apply)
2025-09-29 05:26:12.566792 | orchestrator | 05:26:12.566 STDOUT terraform:   + protected = (known after apply)
2025-09-29 05:26:12.566824 | orchestrator | 05:26:12.566 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.566848 | orchestrator | 05:26:12.566 STDOUT terraform:   + schema = (known after apply)
2025-09-29 05:26:12.566882 | orchestrator | 05:26:12.566 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-29 05:26:12.566913 | orchestrator | 05:26:12.566 STDOUT terraform:   + tags = (known after apply)
2025-09-29 05:26:12.566939 | orchestrator | 05:26:12.566 STDOUT terraform:   + updated_at = (known after apply)
2025-09-29 05:26:12.566955 | orchestrator | 05:26:12.566 STDOUT terraform:   }
2025-09-29 05:26:12.566999 | orchestrator | 05:26:12.566 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-29 05:26:12.567027 | orchestrator | 05:26:12.566 STDOUT terraform:   # (config refers to values not yet known)
2025-09-29 05:26:12.567064 | orchestrator | 05:26:12.567 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-29 05:26:12.567091 | orchestrator | 05:26:12.567 STDOUT terraform:   + checksum = (known after apply)
2025-09-29 05:26:12.567142 | orchestrator | 05:26:12.567 STDOUT terraform:   + created_at = (known after apply)
2025-09-29 05:26:12.567149 | orchestrator | 05:26:12.567 STDOUT terraform:   + file = (known after apply)
2025-09-29 05:26:12.567183 | orchestrator | 05:26:12.567 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.567222 | orchestrator | 05:26:12.567 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.567239 | orchestrator | 05:26:12.567 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-09-29 05:26:12.567267 | orchestrator | 05:26:12.567 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-09-29 05:26:12.567293 | orchestrator | 05:26:12.567 STDOUT terraform:   + most_recent = true
2025-09-29 05:26:12.567315 | orchestrator | 05:26:12.567 STDOUT terraform:   + name = (known after apply)
2025-09-29 05:26:12.567342 | orchestrator | 05:26:12.567 STDOUT terraform:   + protected = (known after apply)
2025-09-29 05:26:12.567390 | orchestrator | 05:26:12.567 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.567407 | orchestrator | 05:26:12.567 STDOUT terraform:   + schema = (known after apply)
2025-09-29 05:26:12.567433 | orchestrator | 05:26:12.567 STDOUT terraform:   + size_bytes = (known after apply)
2025-09-29 05:26:12.567471 | orchestrator | 05:26:12.567 STDOUT terraform:   + tags = (known after apply)
2025-09-29 05:26:12.567489 | orchestrator | 05:26:12.567 STDOUT terraform:   + updated_at = (known after apply)
2025-09-29 05:26:12.567495 | orchestrator | 05:26:12.567 STDOUT terraform:   }
2025-09-29 05:26:12.567542 | orchestrator | 05:26:12.567 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-29 05:26:12.567564 | orchestrator | 05:26:12.567 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-29 05:26:12.567599 | orchestrator | 05:26:12.567 STDOUT terraform:   + content = (known after apply)
2025-09-29 05:26:12.567651 | orchestrator | 05:26:12.567 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-29 05:26:12.567677 | orchestrator | 05:26:12.567 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-29 05:26:12.567716 | orchestrator | 05:26:12.567 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-29 05:26:12.567745 | orchestrator | 05:26:12.567 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-29 05:26:12.567787 | orchestrator | 05:26:12.567 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-29 05:26:12.567813 | orchestrator | 05:26:12.567 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-29 05:26:12.567838 | orchestrator | 05:26:12.567 STDOUT terraform:   + directory_permission = "0777"
2025-09-29 05:26:12.567871 | orchestrator | 05:26:12.567 STDOUT terraform:   + file_permission = "0644"
2025-09-29 05:26:12.567900 | orchestrator | 05:26:12.567 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-09-29 05:26:12.567935 | orchestrator | 05:26:12.567 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.567963 | orchestrator | 05:26:12.567 STDOUT terraform:   }
2025-09-29 05:26:12.567969 | orchestrator | 05:26:12.567 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-29 05:26:12.567996 | orchestrator | 05:26:12.567 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-29 05:26:12.568044 | orchestrator | 05:26:12.567 STDOUT terraform:   + content = (known after apply)
2025-09-29 05:26:12.568064 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-29 05:26:12.568098 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-29 05:26:12.568134 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-29 05:26:12.568169 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-29 05:26:12.568203 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-29 05:26:12.568238 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-29 05:26:12.568280 | orchestrator | 05:26:12.568 STDOUT terraform:   + directory_permission = "0777"
2025-09-29 05:26:12.568287 | orchestrator | 05:26:12.568 STDOUT terraform:   + file_permission = "0644"
2025-09-29 05:26:12.568316 | orchestrator | 05:26:12.568 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-09-29 05:26:12.568360 | orchestrator | 05:26:12.568 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.568366 | orchestrator | 05:26:12.568 STDOUT terraform:   }
2025-09-29 05:26:12.568394 | orchestrator | 05:26:12.568 STDOUT terraform:   # local_file.inventory will be created
2025-09-29 05:26:12.568420 | orchestrator | 05:26:12.568 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-29 05:26:12.568455 | orchestrator | 05:26:12.568 STDOUT terraform:   + content = (known after apply)
2025-09-29 05:26:12.568488 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-29 05:26:12.568524 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-29 05:26:12.568556 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-29 05:26:12.568600 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-29 05:26:12.568623 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-29 05:26:12.568687 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-29 05:26:12.568704 | orchestrator | 05:26:12.568 STDOUT terraform:   + directory_permission = "0777"
2025-09-29 05:26:12.568727 | orchestrator | 05:26:12.568 STDOUT terraform:   + file_permission = "0644"
2025-09-29 05:26:12.568766 | orchestrator | 05:26:12.568 STDOUT terraform:   + filename = "inventory.ci"
2025-09-29 05:26:12.568796 | orchestrator | 05:26:12.568 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.568802 | orchestrator | 05:26:12.568 STDOUT terraform:   }
2025-09-29 05:26:12.568837 | orchestrator | 05:26:12.568 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-29 05:26:12.568857 | orchestrator | 05:26:12.568 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-29 05:26:12.568887 | orchestrator | 05:26:12.568 STDOUT terraform:   + content = (sensitive value)
2025-09-29 05:26:12.568933 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-09-29 05:26:12.568963 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-09-29 05:26:12.569011 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_md5 = (known after apply)
2025-09-29 05:26:12.569034 | orchestrator | 05:26:12.568 STDOUT terraform:   + content_sha1 = (known after apply)
2025-09-29 05:26:12.569068 | orchestrator | 05:26:12.569 STDOUT terraform:   + content_sha256 = (known after apply)
2025-09-29 05:26:12.569104 | orchestrator | 05:26:12.569 STDOUT terraform:   + content_sha512 = (known after apply)
2025-09-29 05:26:12.569127 | orchestrator | 05:26:12.569 STDOUT terraform:   + directory_permission = "0700"
2025-09-29 05:26:12.569149 | orchestrator | 05:26:12.569 STDOUT terraform:   + file_permission = "0600"
2025-09-29 05:26:12.569179 | orchestrator | 05:26:12.569 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-09-29 05:26:12.569212 | orchestrator | 05:26:12.569 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.569218 | orchestrator | 05:26:12.569 STDOUT terraform:   }
2025-09-29 05:26:12.569257 | orchestrator | 05:26:12.569 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-29 05:26:12.569278 | orchestrator | 05:26:12.569 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-29 05:26:12.569298 | orchestrator | 05:26:12.569 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.569305 | orchestrator | 05:26:12.569 STDOUT terraform:   }
2025-09-29 05:26:12.569357 | orchestrator | 05:26:12.569 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-29 05:26:12.569413 | orchestrator | 05:26:12.569 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-29 05:26:12.569436 | orchestrator | 05:26:12.569 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.569461 | orchestrator | 05:26:12.569 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.569496 | orchestrator | 05:26:12.569 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.569530 | orchestrator | 05:26:12.569 STDOUT terraform:   + image_id = (known after apply)
2025-09-29 05:26:12.569571 | orchestrator | 05:26:12.569 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.569606 | orchestrator | 05:26:12.569 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-09-29 05:26:12.569663 | orchestrator | 05:26:12.569 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.569684 | orchestrator | 05:26:12.569 STDOUT terraform:   + size = 80
2025-09-29 05:26:12.569707 | orchestrator | 05:26:12.569 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-29 05:26:12.569743 | orchestrator | 05:26:12.569 STDOUT terraform:   + volume_type = "ssd"
2025-09-29 05:26:12.569747 | orchestrator | 05:26:12.569 STDOUT terraform:   }
2025-09-29 05:26:12.569796 | orchestrator | 05:26:12.569 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-29 05:26:12.569839 | orchestrator | 05:26:12.569 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-29 05:26:12.569878 | orchestrator | 05:26:12.569 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.569903 | orchestrator | 05:26:12.569 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.569939 | orchestrator | 05:26:12.569 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.569978 | orchestrator | 05:26:12.569 STDOUT terraform:   + image_id = (known after apply)
2025-09-29 05:26:12.570008 | orchestrator | 05:26:12.569 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.570073 | orchestrator | 05:26:12.570 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-09-29 05:26:12.570122 | orchestrator | 05:26:12.570 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.570129 | orchestrator | 05:26:12.570 STDOUT terraform:   + size = 80
2025-09-29 05:26:12.570134 | orchestrator | 05:26:12.570 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-29 05:26:12.570162 | orchestrator | 05:26:12.570 STDOUT terraform:   + volume_type = "ssd"
2025-09-29 05:26:12.570169 | orchestrator | 05:26:12.570 STDOUT terraform:   }
2025-09-29 05:26:12.570218 | orchestrator | 05:26:12.570 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-29 05:26:12.570259 | orchestrator | 05:26:12.570 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-29 05:26:12.570303 | orchestrator | 05:26:12.570 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.570329 | orchestrator | 05:26:12.570 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.570381 | orchestrator | 05:26:12.570 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.570434 | orchestrator | 05:26:12.570 STDOUT terraform:   + image_id = (known after apply)
2025-09-29 05:26:12.570490 | orchestrator | 05:26:12.570 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.570545 | orchestrator | 05:26:12.570 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-09-29 05:26:12.570582 | orchestrator | 05:26:12.570 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.570604 | orchestrator | 05:26:12.570 STDOUT terraform:   + size = 80
2025-09-29 05:26:12.570630 | orchestrator | 05:26:12.570 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-29 05:26:12.570673 | orchestrator | 05:26:12.570 STDOUT terraform:   + volume_type = "ssd"
2025-09-29 05:26:12.570680 | orchestrator | 05:26:12.570 STDOUT terraform:   }
2025-09-29 05:26:12.570722 | orchestrator | 05:26:12.570 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-29 05:26:12.570766 | orchestrator | 05:26:12.570 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-29 05:26:12.570799 | orchestrator | 05:26:12.570 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.570824 | orchestrator | 05:26:12.570 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.570864 | orchestrator | 05:26:12.570 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.570901 | orchestrator | 05:26:12.570 STDOUT terraform:   + image_id = (known after apply)
2025-09-29 05:26:12.570936 | orchestrator | 05:26:12.570 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.570979 | orchestrator | 05:26:12.570 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-09-29 05:26:12.571014 | orchestrator | 05:26:12.570 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.571034 | orchestrator | 05:26:12.571 STDOUT terraform:   + size = 80
2025-09-29 05:26:12.571066 | orchestrator | 05:26:12.571 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-29 05:26:12.571083 | orchestrator | 05:26:12.571 STDOUT terraform:   + volume_type = "ssd"
2025-09-29 05:26:12.571089 | orchestrator | 05:26:12.571 STDOUT terraform:   }
2025-09-29 05:26:12.571139 | orchestrator | 05:26:12.571 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-29 05:26:12.571185 | orchestrator | 05:26:12.571 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-29 05:26:12.571224 | orchestrator | 05:26:12.571 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.571245 | orchestrator | 05:26:12.571 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.571288 | orchestrator | 05:26:12.571 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.571327 | orchestrator | 05:26:12.571 STDOUT terraform:   + image_id = (known after apply)
2025-09-29 05:26:12.571362 | orchestrator | 05:26:12.571 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.571405 | orchestrator | 05:26:12.571 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-09-29 05:26:12.571440 | orchestrator | 05:26:12.571 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.571460 | orchestrator | 05:26:12.571 STDOUT terraform:   + size = 80
2025-09-29 05:26:12.571484 | orchestrator | 05:26:12.571 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-29 05:26:12.571509 | orchestrator | 05:26:12.571 STDOUT terraform:   + volume_type = "ssd"
2025-09-29 05:26:12.571523 | orchestrator | 05:26:12.571 STDOUT terraform:   }
2025-09-29 05:26:12.571592 | orchestrator | 05:26:12.571 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-29 05:26:12.571638 | orchestrator | 05:26:12.571 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-29 05:26:12.571705 | orchestrator | 05:26:12.571 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.571723 | orchestrator | 05:26:12.571 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.572232 | orchestrator | 05:26:12.571 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.572249 | orchestrator | 05:26:12.571 STDOUT terraform:   + image_id = (known after apply)
2025-09-29 05:26:12.572253 | orchestrator | 05:26:12.571 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.572257 | orchestrator | 05:26:12.571 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-09-29 05:26:12.572261 | orchestrator | 05:26:12.571 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.572265 | orchestrator | 05:26:12.571 STDOUT terraform:   + size = 80
2025-09-29 05:26:12.572269 | orchestrator | 05:26:12.571 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-29 05:26:12.572274 | orchestrator | 05:26:12.571 STDOUT terraform:   + volume_type = "ssd"
2025-09-29 05:26:12.572278 | orchestrator | 05:26:12.571 STDOUT terraform:   }
2025-09-29 05:26:12.572282 | orchestrator | 05:26:12.571 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-29 05:26:12.572291 | orchestrator | 05:26:12.572 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-29 05:26:12.572296 | orchestrator | 05:26:12.572 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.572300 | orchestrator | 05:26:12.572 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.572304 | orchestrator | 05:26:12.572 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.572307 | orchestrator | 05:26:12.572 STDOUT terraform:   + image_id = (known after apply)
2025-09-29 05:26:12.572311 | orchestrator | 05:26:12.572 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.572318 | orchestrator | 05:26:12.572 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-09-29 05:26:12.572322 | orchestrator | 05:26:12.572 STDOUT terraform:   + region = (known after apply)
2025-09-29 05:26:12.572326 | orchestrator | 05:26:12.572 STDOUT terraform:   + size = 80
2025-09-29 05:26:12.572329 | orchestrator | 05:26:12.572 STDOUT terraform:   + volume_retype_policy = "never"
2025-09-29 05:26:12.572333 | orchestrator | 05:26:12.572 STDOUT terraform:   + volume_type = "ssd"
2025-09-29 05:26:12.572338 | orchestrator | 05:26:12.572 STDOUT terraform:   }
2025-09-29 05:26:12.572371 | orchestrator | 05:26:12.572 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-29 05:26:12.572413 | orchestrator | 05:26:12.572 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-29 05:26:12.572453 | orchestrator | 05:26:12.572 STDOUT terraform:   + attachment = (known after apply)
2025-09-29 05:26:12.572477 | orchestrator | 05:26:12.572 STDOUT terraform:   + availability_zone = "nova"
2025-09-29 05:26:12.572512 | orchestrator | 05:26:12.572 STDOUT terraform:   + id = (known after apply)
2025-09-29 05:26:12.572545 | orchestrator | 05:26:12.572 STDOUT terraform:   + metadata = (known after apply)
2025-09-29 05:26:12.572585 | orchestrator | 05:26:12.572 STDOUT terraform:   + name = "testbed-volume-0-node-3"
2025-09-29 05:26:12.572620 | orchestrator | 05:26:12.572 STDOUT terraform:   + region = (known
after apply) 2025-09-29 05:26:12.572678 | orchestrator | 05:26:12.572 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.572685 | orchestrator | 05:26:12.572 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.572690 | orchestrator | 05:26:12.572 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.572707 | orchestrator | 05:26:12.572 STDOUT terraform:  } 2025-09-29 05:26:12.572750 | orchestrator | 05:26:12.572 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-29 05:26:12.572790 | orchestrator | 05:26:12.572 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.572824 | orchestrator | 05:26:12.572 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 05:26:12.572847 | orchestrator | 05:26:12.572 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.572892 | orchestrator | 05:26:12.572 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.573713 | orchestrator | 05:26:12.572 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.573725 | orchestrator | 05:26:12.572 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-29 05:26:12.573729 | orchestrator | 05:26:12.572 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.573734 | orchestrator | 05:26:12.572 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.573738 | orchestrator | 05:26:12.572 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.573743 | orchestrator | 05:26:12.573 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.573747 | orchestrator | 05:26:12.573 STDOUT terraform:  } 2025-09-29 05:26:12.573752 | orchestrator | 05:26:12.573 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-29 05:26:12.573756 | orchestrator | 05:26:12.573 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.573762 | 
orchestrator | 05:26:12.573 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 05:26:12.573766 | orchestrator | 05:26:12.573 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.573770 | orchestrator | 05:26:12.573 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.573775 | orchestrator | 05:26:12.573 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.573779 | orchestrator | 05:26:12.573 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-29 05:26:12.573784 | orchestrator | 05:26:12.573 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.573788 | orchestrator | 05:26:12.573 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.573792 | orchestrator | 05:26:12.573 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.573797 | orchestrator | 05:26:12.573 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.573801 | orchestrator | 05:26:12.573 STDOUT terraform:  } 2025-09-29 05:26:12.573805 | orchestrator | 05:26:12.573 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-29 05:26:12.573817 | orchestrator | 05:26:12.573 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.573821 | orchestrator | 05:26:12.573 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 05:26:12.573825 | orchestrator | 05:26:12.573 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.573830 | orchestrator | 05:26:12.573 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.573834 | orchestrator | 05:26:12.573 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.573838 | orchestrator | 05:26:12.573 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-29 05:26:12.573843 | orchestrator | 05:26:12.573 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.573847 | orchestrator | 05:26:12.573 STDOUT terraform:  + size 
= 20 2025-09-29 05:26:12.573851 | orchestrator | 05:26:12.573 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.573855 | orchestrator | 05:26:12.573 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.573860 | orchestrator | 05:26:12.573 STDOUT terraform:  } 2025-09-29 05:26:12.573869 | orchestrator | 05:26:12.573 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-29 05:26:12.573874 | orchestrator | 05:26:12.573 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.573878 | orchestrator | 05:26:12.573 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 05:26:12.573882 | orchestrator | 05:26:12.573 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.573887 | orchestrator | 05:26:12.573 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.573892 | orchestrator | 05:26:12.573 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.573898 | orchestrator | 05:26:12.573 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-29 05:26:12.573923 | orchestrator | 05:26:12.573 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.573943 | orchestrator | 05:26:12.573 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.573967 | orchestrator | 05:26:12.573 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.573990 | orchestrator | 05:26:12.573 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.573996 | orchestrator | 05:26:12.573 STDOUT terraform:  } 2025-09-29 05:26:12.582141 | orchestrator | 05:26:12.573 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-29 05:26:12.582177 | orchestrator | 05:26:12.582 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.582199 | orchestrator | 05:26:12.582 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 
05:26:12.582226 | orchestrator | 05:26:12.582 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.582260 | orchestrator | 05:26:12.582 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.582297 | orchestrator | 05:26:12.582 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.582335 | orchestrator | 05:26:12.582 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-29 05:26:12.582374 | orchestrator | 05:26:12.582 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.582395 | orchestrator | 05:26:12.582 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.582421 | orchestrator | 05:26:12.582 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.582446 | orchestrator | 05:26:12.582 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.582452 | orchestrator | 05:26:12.582 STDOUT terraform:  } 2025-09-29 05:26:12.582496 | orchestrator | 05:26:12.582 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-29 05:26:12.582539 | orchestrator | 05:26:12.582 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.582573 | orchestrator | 05:26:12.582 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 05:26:12.582597 | orchestrator | 05:26:12.582 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.582635 | orchestrator | 05:26:12.582 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.582724 | orchestrator | 05:26:12.582 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.582762 | orchestrator | 05:26:12.582 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-29 05:26:12.582796 | orchestrator | 05:26:12.582 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.582817 | orchestrator | 05:26:12.582 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.582840 | orchestrator | 05:26:12.582 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-09-29 05:26:12.582864 | orchestrator | 05:26:12.582 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.582878 | orchestrator | 05:26:12.582 STDOUT terraform:  } 2025-09-29 05:26:12.582926 | orchestrator | 05:26:12.582 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-29 05:26:12.582964 | orchestrator | 05:26:12.582 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.582999 | orchestrator | 05:26:12.582 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 05:26:12.583023 | orchestrator | 05:26:12.582 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.583058 | orchestrator | 05:26:12.583 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.583093 | orchestrator | 05:26:12.583 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.583131 | orchestrator | 05:26:12.583 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-29 05:26:12.583166 | orchestrator | 05:26:12.583 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.583187 | orchestrator | 05:26:12.583 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.583213 | orchestrator | 05:26:12.583 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.583238 | orchestrator | 05:26:12.583 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.583252 | orchestrator | 05:26:12.583 STDOUT terraform:  } 2025-09-29 05:26:12.583293 | orchestrator | 05:26:12.583 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-29 05:26:12.583336 | orchestrator | 05:26:12.583 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-29 05:26:12.583370 | orchestrator | 05:26:12.583 STDOUT terraform:  + attachment = (known after apply) 2025-09-29 05:26:12.583393 | orchestrator | 05:26:12.583 STDOUT terraform:  + availability_zone = 
"nova" 2025-09-29 05:26:12.583427 | orchestrator | 05:26:12.583 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.583462 | orchestrator | 05:26:12.583 STDOUT terraform:  + metadata = (known after apply) 2025-09-29 05:26:12.583499 | orchestrator | 05:26:12.583 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-29 05:26:12.583534 | orchestrator | 05:26:12.583 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.583555 | orchestrator | 05:26:12.583 STDOUT terraform:  + size = 20 2025-09-29 05:26:12.583577 | orchestrator | 05:26:12.583 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-29 05:26:12.583600 | orchestrator | 05:26:12.583 STDOUT terraform:  + volume_type = "ssd" 2025-09-29 05:26:12.583620 | orchestrator | 05:26:12.583 STDOUT terraform:  } 2025-09-29 05:26:12.583675 | orchestrator | 05:26:12.583 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-29 05:26:12.583716 | orchestrator | 05:26:12.583 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-29 05:26:12.583749 | orchestrator | 05:26:12.583 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-29 05:26:12.583782 | orchestrator | 05:26:12.583 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-29 05:26:12.583819 | orchestrator | 05:26:12.583 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-29 05:26:12.583850 | orchestrator | 05:26:12.583 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.583873 | orchestrator | 05:26:12.583 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.583893 | orchestrator | 05:26:12.583 STDOUT terraform:  + config_drive = true 2025-09-29 05:26:12.583927 | orchestrator | 05:26:12.583 STDOUT terraform:  + created = (known after apply) 2025-09-29 05:26:12.583963 | orchestrator | 05:26:12.583 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-29 05:26:12.583992 | orchestrator | 
05:26:12.583 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-29 05:26:12.584015 | orchestrator | 05:26:12.583 STDOUT terraform:  + force_delete = false 2025-09-29 05:26:12.584050 | orchestrator | 05:26:12.584 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-29 05:26:12.584088 | orchestrator | 05:26:12.584 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.584122 | orchestrator | 05:26:12.584 STDOUT terraform:  + image_id = (known after apply) 2025-09-29 05:26:12.584169 | orchestrator | 05:26:12.584 STDOUT terraform:  + image_name = (known after apply) 2025-09-29 05:26:12.584203 | orchestrator | 05:26:12.584 STDOUT terraform:  + key_pair = "testbed" 2025-09-29 05:26:12.584230 | orchestrator | 05:26:12.584 STDOUT terraform:  + name = "testbed-manager" 2025-09-29 05:26:12.584254 | orchestrator | 05:26:12.584 STDOUT terraform:  + power_state = "active" 2025-09-29 05:26:12.584288 | orchestrator | 05:26:12.584 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.584321 | orchestrator | 05:26:12.584 STDOUT terraform:  + security_groups = (known after apply) 2025-09-29 05:26:12.584343 | orchestrator | 05:26:12.584 STDOUT terraform:  + stop_before_destroy = false 2025-09-29 05:26:12.584376 | orchestrator | 05:26:12.584 STDOUT terraform:  + updated = (known after apply) 2025-09-29 05:26:12.584412 | orchestrator | 05:26:12.584 STDOUT terraform:  + user_data = (sensitive value) 2025-09-29 05:26:12.584435 | orchestrator | 05:26:12.584 STDOUT terraform:  + block_device { 2025-09-29 05:26:12.584459 | orchestrator | 05:26:12.584 STDOUT terraform:  + boot_index = 0 2025-09-29 05:26:12.584485 | orchestrator | 05:26:12.584 STDOUT terraform:  + delete_on_termination = false 2025-09-29 05:26:12.584515 | orchestrator | 05:26:12.584 STDOUT terraform:  + destination_type = "volume" 2025-09-29 05:26:12.584541 | orchestrator | 05:26:12.584 STDOUT terraform:  + multiattach = false 2025-09-29 05:26:12.584574 | orchestrator | 
05:26:12.584 STDOUT terraform:  + source_type = "volume" 2025-09-29 05:26:12.584614 | orchestrator | 05:26:12.584 STDOUT terraform:  + uuid = (known after apply) 2025-09-29 05:26:12.584627 | orchestrator | 05:26:12.584 STDOUT terraform:  } 2025-09-29 05:26:12.584657 | orchestrator | 05:26:12.584 STDOUT terraform:  + network { 2025-09-29 05:26:12.584685 | orchestrator | 05:26:12.584 STDOUT terraform:  + access_network = false 2025-09-29 05:26:12.584715 | orchestrator | 05:26:12.584 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-29 05:26:12.584744 | orchestrator | 05:26:12.584 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-29 05:26:12.584781 | orchestrator | 05:26:12.584 STDOUT terraform:  + mac = (known after apply) 2025-09-29 05:26:12.584814 | orchestrator | 05:26:12.584 STDOUT terraform:  + name = (known after apply) 2025-09-29 05:26:12.584844 | orchestrator | 05:26:12.584 STDOUT terraform:  + port = (known after apply) 2025-09-29 05:26:12.584873 | orchestrator | 05:26:12.584 STDOUT terraform:  + uuid = (known after apply) 2025-09-29 05:26:12.584887 | orchestrator | 05:26:12.584 STDOUT terraform:  } 2025-09-29 05:26:12.584911 | orchestrator | 05:26:12.584 STDOUT terraform:  } 2025-09-29 05:26:12.584959 | orchestrator | 05:26:12.584 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-29 05:26:12.585002 | orchestrator | 05:26:12.584 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-29 05:26:12.585035 | orchestrator | 05:26:12.584 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-29 05:26:12.585068 | orchestrator | 05:26:12.585 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-29 05:26:12.585108 | orchestrator | 05:26:12.585 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-29 05:26:12.585148 | orchestrator | 05:26:12.585 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.585171 | orchestrator | 
05:26:12.585 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.585191 | orchestrator | 05:26:12.585 STDOUT terraform:  + config_drive = true 2025-09-29 05:26:12.585230 | orchestrator | 05:26:12.585 STDOUT terraform:  + created = (known after apply) 2025-09-29 05:26:12.585264 | orchestrator | 05:26:12.585 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-29 05:26:12.585293 | orchestrator | 05:26:12.585 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-29 05:26:12.585317 | orchestrator | 05:26:12.585 STDOUT terraform:  + force_delete = false 2025-09-29 05:26:12.585349 | orchestrator | 05:26:12.585 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-29 05:26:12.585384 | orchestrator | 05:26:12.585 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.585418 | orchestrator | 05:26:12.585 STDOUT terraform:  + image_id = (known after apply) 2025-09-29 05:26:12.585452 | orchestrator | 05:26:12.585 STDOUT terraform:  + image_name = (known after apply) 2025-09-29 05:26:12.585476 | orchestrator | 05:26:12.585 STDOUT terraform:  + key_pair = "testbed" 2025-09-29 05:26:12.585507 | orchestrator | 05:26:12.585 STDOUT terraform:  + name = "testbed-node-0" 2025-09-29 05:26:12.585529 | orchestrator | 05:26:12.585 STDOUT terraform:  + power_state = "active" 2025-09-29 05:26:12.585563 | orchestrator | 05:26:12.585 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.585599 | orchestrator | 05:26:12.585 STDOUT terraform:  + security_groups = (known after apply) 2025-09-29 05:26:12.585618 | orchestrator | 05:26:12.585 STDOUT terraform:  + stop_before_destroy = false 2025-09-29 05:26:12.585664 | orchestrator | 05:26:12.585 STDOUT terraform:  + updated = (known after apply) 2025-09-29 05:26:12.585711 | orchestrator | 05:26:12.585 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-29 05:26:12.585728 | orchestrator | 05:26:12.585 STDOUT terraform:  + block_device { 
2025-09-29 05:26:12.585752 | orchestrator | 05:26:12.585 STDOUT terraform:  + boot_index = 0 2025-09-29 05:26:12.585778 | orchestrator | 05:26:12.585 STDOUT terraform:  + delete_on_termination = false 2025-09-29 05:26:12.585808 | orchestrator | 05:26:12.585 STDOUT terraform:  + destination_type = "volume" 2025-09-29 05:26:12.585835 | orchestrator | 05:26:12.585 STDOUT terraform:  + multiattach = false 2025-09-29 05:26:12.585864 | orchestrator | 05:26:12.585 STDOUT terraform:  + source_type = "volume" 2025-09-29 05:26:12.585900 | orchestrator | 05:26:12.585 STDOUT terraform:  + uuid = (known after apply) 2025-09-29 05:26:12.585914 | orchestrator | 05:26:12.585 STDOUT terraform:  } 2025-09-29 05:26:12.585928 | orchestrator | 05:26:12.585 STDOUT terraform:  + network { 2025-09-29 05:26:12.585948 | orchestrator | 05:26:12.585 STDOUT terraform:  + access_network = false 2025-09-29 05:26:12.585978 | orchestrator | 05:26:12.585 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-29 05:26:12.586007 | orchestrator | 05:26:12.585 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-29 05:26:12.589738 | orchestrator | 05:26:12.586 STDOUT terraform:  + mac = (known after apply) 2025-09-29 05:26:12.589813 | orchestrator | 05:26:12.589 STDOUT terraform:  + name = (known after apply) 2025-09-29 05:26:12.589865 | orchestrator | 05:26:12.589 STDOUT terraform:  + port = (known after apply) 2025-09-29 05:26:12.589918 | orchestrator | 05:26:12.589 STDOUT terraform:  + uuid = (known after apply) 2025-09-29 05:26:12.589951 | orchestrator | 05:26:12.589 STDOUT terraform:  } 2025-09-29 05:26:12.589958 | orchestrator | 05:26:12.589 STDOUT terraform:  } 2025-09-29 05:26:12.590055 | orchestrator | 05:26:12.589 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-29 05:26:12.590125 | orchestrator | 05:26:12.590 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-29 05:26:12.590183 | orchestrator | 
05:26:12.590 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-29 05:26:12.590241 | orchestrator | 05:26:12.590 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-29 05:26:12.590306 | orchestrator | 05:26:12.590 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-29 05:26:12.590365 | orchestrator | 05:26:12.590 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.590406 | orchestrator | 05:26:12.590 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.590439 | orchestrator | 05:26:12.590 STDOUT terraform:  + config_drive = true 2025-09-29 05:26:12.590497 | orchestrator | 05:26:12.590 STDOUT terraform:  + created = (known after apply) 2025-09-29 05:26:12.590556 | orchestrator | 05:26:12.590 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-29 05:26:12.590605 | orchestrator | 05:26:12.590 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-29 05:26:12.590693 | orchestrator | 05:26:12.590 STDOUT terraform:  + force_delete = false 2025-09-29 05:26:12.590748 | orchestrator | 05:26:12.590 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-29 05:26:12.590810 | orchestrator | 05:26:12.590 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.590870 | orchestrator | 05:26:12.590 STDOUT terraform:  + image_id = (known after apply) 2025-09-29 05:26:12.590931 | orchestrator | 05:26:12.590 STDOUT terraform:  + image_name = (known after apply) 2025-09-29 05:26:12.590973 | orchestrator | 05:26:12.590 STDOUT terraform:  + key_pair = "testbed" 2025-09-29 05:26:12.591024 | orchestrator | 05:26:12.590 STDOUT terraform:  + name = "testbed-node-1" 2025-09-29 05:26:12.591065 | orchestrator | 05:26:12.591 STDOUT terraform:  + power_state = "active" 2025-09-29 05:26:12.591123 | orchestrator | 05:26:12.591 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.591178 | orchestrator | 05:26:12.591 STDOUT terraform:  + security_groups = (known after apply) 
2025-09-29 05:26:12.591217 | orchestrator | 05:26:12.591 STDOUT terraform:  + stop_before_destroy = false 2025-09-29 05:26:12.591272 | orchestrator | 05:26:12.591 STDOUT terraform:  + updated = (known after apply) 2025-09-29 05:26:12.591350 | orchestrator | 05:26:12.591 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-29 05:26:12.591376 | orchestrator | 05:26:12.591 STDOUT terraform:  + block_device { 2025-09-29 05:26:12.591413 | orchestrator | 05:26:12.591 STDOUT terraform:  + boot_index = 0 2025-09-29 05:26:12.591456 | orchestrator | 05:26:12.591 STDOUT terraform:  + delete_on_termination = false 2025-09-29 05:26:12.591504 | orchestrator | 05:26:12.591 STDOUT terraform:  + destination_type = "volume" 2025-09-29 05:26:12.591552 | orchestrator | 05:26:12.591 STDOUT terraform:  + multiattach = false 2025-09-29 05:26:12.591594 | orchestrator | 05:26:12.591 STDOUT terraform:  + source_type = "volume" 2025-09-29 05:26:12.591668 | orchestrator | 05:26:12.591 STDOUT terraform:  + uuid = (known after apply) 2025-09-29 05:26:12.591686 | orchestrator | 05:26:12.591 STDOUT terraform:  } 2025-09-29 05:26:12.591717 | orchestrator | 05:26:12.591 STDOUT terraform:  + network { 2025-09-29 05:26:12.591742 | orchestrator | 05:26:12.591 STDOUT terraform:  + access_network = false 2025-09-29 05:26:12.591789 | orchestrator | 05:26:12.591 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-29 05:26:12.591836 | orchestrator | 05:26:12.591 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-29 05:26:12.592193 | orchestrator | 05:26:12.591 STDOUT terraform:  + mac = (known after apply) 2025-09-29 05:26:12.592199 | orchestrator | 05:26:12.591 STDOUT terraform:  + name = (known after apply) 2025-09-29 05:26:12.592205 | orchestrator | 05:26:12.591 STDOUT terraform:  + port = (known after apply) 2025-09-29 05:26:12.592209 | orchestrator | 05:26:12.592 STDOUT terraform:  + uuid = (known after apply) 2025-09-29 05:26:12.592213 | 
orchestrator | 05:26:12.592 STDOUT terraform:  } 2025-09-29 05:26:12.592217 | orchestrator | 05:26:12.592 STDOUT terraform:  } 2025-09-29 05:26:12.592221 | orchestrator | 05:26:12.592 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-29 05:26:12.592460 | orchestrator | 05:26:12.592 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-29 05:26:12.592465 | orchestrator | 05:26:12.592 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-29 05:26:12.592469 | orchestrator | 05:26:12.592 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-29 05:26:12.592473 | orchestrator | 05:26:12.592 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-29 05:26:12.592477 | orchestrator | 05:26:12.592 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.592482 | orchestrator | 05:26:12.592 STDOUT terraform:  + availability_zone = "nova" 2025-09-29 05:26:12.592487 | orchestrator | 05:26:12.592 STDOUT terraform:  + config_drive = true 2025-09-29 05:26:12.592733 | orchestrator | 05:26:12.592 STDOUT terraform:  + created = (known after apply) 2025-09-29 05:26:12.592739 | orchestrator | 05:26:12.592 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-29 05:26:12.592743 | orchestrator | 05:26:12.592 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-29 05:26:12.592750 | orchestrator | 05:26:12.592 STDOUT terraform:  + force_delete = false 2025-09-29 05:26:12.592754 | orchestrator | 05:26:12.592 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-29 05:26:12.592994 | orchestrator | 05:26:12.592 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.592998 | orchestrator | 05:26:12.592 STDOUT terraform:  + image_id = (known after apply) 2025-09-29 05:26:12.593002 | orchestrator | 05:26:12.592 STDOUT terraform:  + image_name = (known after apply) 2025-09-29 05:26:12.593006 | orchestrator | 05:26:12.592 STDOUT terraform:  + 
2025-09-29 05:26:12 | orchestrator | STDOUT terraform:
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }
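The `node_server` plan entries differ only in their `name` index, which is the typical footprint of a `count`-based resource. A minimal HCL sketch consistent with the attributes shown above might look as follows; `node_count`, the volume resource, and the port reference are assumed names for illustration, not taken from the actual testbed Terraform:

```hcl
# Sketch only: reconstructs the shape of the planned instances from the plan
# output above. "node_volume" and "node_port_management" references are
# assumptions; only the literal values come from the plan.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                               # testbed-node-0 .. testbed-node-5
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")           # shown only as a hash in the plan

  # Boot from a pre-created volume (source_type = destination_type = "volume").
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```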
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
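Nine `node_volume_attachment` entries are planned, all with every attribute known only after apply, so the instance-to-volume mapping is not visible in the plan itself. A hedged sketch of how such an attachment list is typically declared; the `extra_volume` resource and the index-to-instance mapping are assumptions for illustration only:

```hcl
# Sketch only: attaches one pre-created data volume per planned attachment.
# "extra_volume" and the mapping of attachment index to instance index are
# hypothetical; the real testbed code may distribute volumes differently.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id
}
```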
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
terraform:  } 2025-09-29 05:26:12.615693 | orchestrator | 05:26:12.615 STDOUT terraform:  } 2025-09-29 05:26:12.615741 | orchestrator | 05:26:12.615 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-09-29 05:26:12.615784 | orchestrator | 05:26:12.615 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-29 05:26:12.615821 | orchestrator | 05:26:12.615 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-29 05:26:12.615857 | orchestrator | 05:26:12.615 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-29 05:26:12.615891 | orchestrator | 05:26:12.615 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-29 05:26:12.615929 | orchestrator | 05:26:12.615 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.615977 | orchestrator | 05:26:12.615 STDOUT terraform:  + device_id = (known after apply) 2025-09-29 05:26:12.616013 | orchestrator | 05:26:12.615 STDOUT terraform:  + device_owner = (known after apply) 2025-09-29 05:26:12.616048 | orchestrator | 05:26:12.616 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-29 05:26:12.616084 | orchestrator | 05:26:12.616 STDOUT terraform:  + dns_name = (known after apply) 2025-09-29 05:26:12.616126 | orchestrator | 05:26:12.616 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.616164 | orchestrator | 05:26:12.616 STDOUT terraform:  + mac_address = (known after apply) 2025-09-29 05:26:12.616201 | orchestrator | 05:26:12.616 STDOUT terraform:  + network_id = (known after apply) 2025-09-29 05:26:12.616236 | orchestrator | 05:26:12.616 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-29 05:26:12.616272 | orchestrator | 05:26:12.616 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-29 05:26:12.616309 | orchestrator | 05:26:12.616 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.616344 | 
orchestrator | 05:26:12.616 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-29 05:26:12.616382 | orchestrator | 05:26:12.616 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.616403 | orchestrator | 05:26:12.616 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.616433 | orchestrator | 05:26:12.616 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-29 05:26:12.616440 | orchestrator | 05:26:12.616 STDOUT terraform:  } 2025-09-29 05:26:12.616465 | orchestrator | 05:26:12.616 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.616493 | orchestrator | 05:26:12.616 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-29 05:26:12.616501 | orchestrator | 05:26:12.616 STDOUT terraform:  } 2025-09-29 05:26:12.616522 | orchestrator | 05:26:12.616 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.616551 | orchestrator | 05:26:12.616 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-29 05:26:12.616558 | orchestrator | 05:26:12.616 STDOUT terraform:  } 2025-09-29 05:26:12.616586 | orchestrator | 05:26:12.616 STDOUT terraform:  + binding (known after apply) 2025-09-29 05:26:12.616593 | orchestrator | 05:26:12.616 STDOUT terraform:  + fixed_ip { 2025-09-29 05:26:12.616621 | orchestrator | 05:26:12.616 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-29 05:26:12.616673 | orchestrator | 05:26:12.616 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-29 05:26:12.616681 | orchestrator | 05:26:12.616 STDOUT terraform:  } 2025-09-29 05:26:12.616698 | orchestrator | 05:26:12.616 STDOUT terraform:  } 2025-09-29 05:26:12.616744 | orchestrator | 05:26:12.616 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-29 05:26:12.616798 | orchestrator | 05:26:12.616 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-29 05:26:12.616834 | orchestrator | 05:26:12.616 STDOUT 
terraform:  + admin_state_up = (known after apply) 2025-09-29 05:26:12.616872 | orchestrator | 05:26:12.616 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-29 05:26:12.616909 | orchestrator | 05:26:12.616 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-29 05:26:12.616945 | orchestrator | 05:26:12.616 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.616980 | orchestrator | 05:26:12.616 STDOUT terraform:  + device_id = (known after apply) 2025-09-29 05:26:12.617017 | orchestrator | 05:26:12.616 STDOUT terraform:  + device_owner = (known after apply) 2025-09-29 05:26:12.617055 | orchestrator | 05:26:12.617 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-29 05:26:12.617090 | orchestrator | 05:26:12.617 STDOUT terraform:  + dns_name = (known after apply) 2025-09-29 05:26:12.617126 | orchestrator | 05:26:12.617 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.617164 | orchestrator | 05:26:12.617 STDOUT terraform:  + mac_address = (known after apply) 2025-09-29 05:26:12.617202 | orchestrator | 05:26:12.617 STDOUT terraform:  + network_id = (known after apply) 2025-09-29 05:26:12.617241 | orchestrator | 05:26:12.617 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-29 05:26:12.617271 | orchestrator | 05:26:12.617 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-29 05:26:12.617309 | orchestrator | 05:26:12.617 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.617346 | orchestrator | 05:26:12.617 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-29 05:26:12.617380 | orchestrator | 05:26:12.617 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.617403 | orchestrator | 05:26:12.617 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.617434 | orchestrator | 05:26:12.617 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-29 05:26:12.617441 | orchestrator | 
05:26:12.617 STDOUT terraform:  } 2025-09-29 05:26:12.617464 | orchestrator | 05:26:12.617 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.617492 | orchestrator | 05:26:12.617 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-29 05:26:12.617498 | orchestrator | 05:26:12.617 STDOUT terraform:  } 2025-09-29 05:26:12.617523 | orchestrator | 05:26:12.617 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.617551 | orchestrator | 05:26:12.617 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-29 05:26:12.617558 | orchestrator | 05:26:12.617 STDOUT terraform:  } 2025-09-29 05:26:12.617585 | orchestrator | 05:26:12.617 STDOUT terraform:  + binding (known after apply) 2025-09-29 05:26:12.617591 | orchestrator | 05:26:12.617 STDOUT terraform:  + fixed_ip { 2025-09-29 05:26:12.617623 | orchestrator | 05:26:12.617 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-29 05:26:12.617661 | orchestrator | 05:26:12.617 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-29 05:26:12.617668 | orchestrator | 05:26:12.617 STDOUT terraform:  } 2025-09-29 05:26:12.617674 | orchestrator | 05:26:12.617 STDOUT terraform:  } 2025-09-29 05:26:12.617723 | orchestrator | 05:26:12.617 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-29 05:26:12.617769 | orchestrator | 05:26:12.617 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-29 05:26:12.617804 | orchestrator | 05:26:12.617 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-29 05:26:12.617840 | orchestrator | 05:26:12.617 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-29 05:26:12.617877 | orchestrator | 05:26:12.617 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-29 05:26:12.617917 | orchestrator | 05:26:12.617 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.617949 | orchestrator | 05:26:12.617 STDOUT 
terraform:  + device_id = (known after apply) 2025-09-29 05:26:12.617986 | orchestrator | 05:26:12.617 STDOUT terraform:  + device_owner = (known after apply) 2025-09-29 05:26:12.618055 | orchestrator | 05:26:12.617 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-29 05:26:12.618096 | orchestrator | 05:26:12.618 STDOUT terraform:  + dns_name = (known after apply) 2025-09-29 05:26:12.618135 | orchestrator | 05:26:12.618 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.618171 | orchestrator | 05:26:12.618 STDOUT terraform:  + mac_address = (known after apply) 2025-09-29 05:26:12.618214 | orchestrator | 05:26:12.618 STDOUT terraform:  + network_id = (known after apply) 2025-09-29 05:26:12.618246 | orchestrator | 05:26:12.618 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-29 05:26:12.618284 | orchestrator | 05:26:12.618 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-29 05:26:12.618319 | orchestrator | 05:26:12.618 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.618357 | orchestrator | 05:26:12.618 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-29 05:26:12.618402 | orchestrator | 05:26:12.618 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.618409 | orchestrator | 05:26:12.618 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.618440 | orchestrator | 05:26:12.618 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-29 05:26:12.618448 | orchestrator | 05:26:12.618 STDOUT terraform:  } 2025-09-29 05:26:12.618472 | orchestrator | 05:26:12.618 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.618502 | orchestrator | 05:26:12.618 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-29 05:26:12.618509 | orchestrator | 05:26:12.618 STDOUT terraform:  } 2025-09-29 05:26:12.618534 | orchestrator | 05:26:12.618 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.618563 | 
orchestrator | 05:26:12.618 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-29 05:26:12.618570 | orchestrator | 05:26:12.618 STDOUT terraform:  } 2025-09-29 05:26:12.618604 | orchestrator | 05:26:12.618 STDOUT terraform:  + binding (known after apply) 2025-09-29 05:26:12.618610 | orchestrator | 05:26:12.618 STDOUT terraform:  + fixed_ip { 2025-09-29 05:26:12.618632 | orchestrator | 05:26:12.618 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-29 05:26:12.618682 | orchestrator | 05:26:12.618 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-29 05:26:12.618688 | orchestrator | 05:26:12.618 STDOUT terraform:  } 2025-09-29 05:26:12.618694 | orchestrator | 05:26:12.618 STDOUT terraform:  } 2025-09-29 05:26:12.618741 | orchestrator | 05:26:12.618 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-29 05:26:12.618784 | orchestrator | 05:26:12.618 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-29 05:26:12.618816 | orchestrator | 05:26:12.618 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-29 05:26:12.618911 | orchestrator | 05:26:12.618 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-29 05:26:12.618919 | orchestrator | 05:26:12.618 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-29 05:26:12.619024 | orchestrator | 05:26:12.618 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.619058 | orchestrator | 05:26:12.619 STDOUT terraform:  + device_id = (known after apply) 2025-09-29 05:26:12.619092 | orchestrator | 05:26:12.619 STDOUT terraform:  + device_owner = (known after apply) 2025-09-29 05:26:12.619130 | orchestrator | 05:26:12.619 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-29 05:26:12.619176 | orchestrator | 05:26:12.619 STDOUT terraform:  + dns_name = (known after apply) 2025-09-29 05:26:12.619209 | orchestrator | 05:26:12.619 STDOUT terraform:  
+ id = (known after apply) 2025-09-29 05:26:12.619247 | orchestrator | 05:26:12.619 STDOUT terraform:  + mac_address = (known after apply) 2025-09-29 05:26:12.619289 | orchestrator | 05:26:12.619 STDOUT terraform:  + network_id = (known after apply) 2025-09-29 05:26:12.619322 | orchestrator | 05:26:12.619 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-29 05:26:12.619359 | orchestrator | 05:26:12.619 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-29 05:26:12.619399 | orchestrator | 05:26:12.619 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.619452 | orchestrator | 05:26:12.619 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-29 05:26:12.619487 | orchestrator | 05:26:12.619 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.619509 | orchestrator | 05:26:12.619 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.619549 | orchestrator | 05:26:12.619 STDOUT terraform:  + ip_address = "192.168.16.254/32" 2025-09-29 05:26:12.619558 | orchestrator | 05:26:12.619 STDOUT terraform:  } 2025-09-29 05:26:12.619580 | orchestrator | 05:26:12.619 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.619609 | orchestrator | 05:26:12.619 STDOUT terraform:  + ip_address = "192.168.16.8/32" 2025-09-29 05:26:12.619617 | orchestrator | 05:26:12.619 STDOUT terraform:  } 2025-09-29 05:26:12.619636 | orchestrator | 05:26:12.619 STDOUT terraform:  + allowed_address_pairs { 2025-09-29 05:26:12.619676 | orchestrator | 05:26:12.619 STDOUT terraform:  + ip_address = "192.168.16.9/32" 2025-09-29 05:26:12.619698 | orchestrator | 05:26:12.619 STDOUT terraform:  } 2025-09-29 05:26:12.619725 | orchestrator | 05:26:12.619 STDOUT terraform:  + binding (known after apply) 2025-09-29 05:26:12.619732 | orchestrator | 05:26:12.619 STDOUT terraform:  + fixed_ip { 2025-09-29 05:26:12.619762 | orchestrator | 05:26:12.619 STDOUT terraform:  + ip_address = "192.168.16.15" 
2025-09-29 05:26:12.620227 | orchestrator | 05:26:12.619 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-29 05:26:12.620238 | orchestrator | 05:26:12.619 STDOUT terraform:  } 2025-09-29 05:26:12.620242 | orchestrator | 05:26:12.619 STDOUT terraform:  } 2025-09-29 05:26:12.620246 | orchestrator | 05:26:12.619 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-29 05:26:12.620250 | orchestrator | 05:26:12.619 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-29 05:26:12.620254 | orchestrator | 05:26:12.619 STDOUT terraform:  + force_destroy = false 2025-09-29 05:26:12.620266 | orchestrator | 05:26:12.619 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.620270 | orchestrator | 05:26:12.619 STDOUT terraform:  + port_id = (known after apply) 2025-09-29 05:26:12.620274 | orchestrator | 05:26:12.619 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.620278 | orchestrator | 05:26:12.619 STDOUT terraform:  + router_id = (known after apply) 2025-09-29 05:26:12.620281 | orchestrator | 05:26:12.620 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-29 05:26:12.620285 | orchestrator | 05:26:12.620 STDOUT terraform:  } 2025-09-29 05:26:12.620289 | orchestrator | 05:26:12.620 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-09-29 05:26:12.620300 | orchestrator | 05:26:12.620 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-29 05:26:12.620304 | orchestrator | 05:26:12.620 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-29 05:26:12.620307 | orchestrator | 05:26:12.620 STDOUT terraform:  + all_tags = (known after apply) 2025-09-29 05:26:12.620311 | orchestrator | 05:26:12.620 STDOUT terraform:  + availability_zone_hints = [ 2025-09-29 05:26:12.620315 | orchestrator | 05:26:12.620 STDOUT terraform:  + "nova", 2025-09-29 05:26:12.620319 | 
orchestrator | 05:26:12.620 STDOUT terraform:  ] 2025-09-29 05:26:12.620325 | orchestrator | 05:26:12.620 STDOUT terraform:  + distributed = (known after apply) 2025-09-29 05:26:12.620329 | orchestrator | 05:26:12.620 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-29 05:26:12.620333 | orchestrator | 05:26:12.620 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-29 05:26:12.620354 | orchestrator | 05:26:12.620 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-29 05:26:12.620391 | orchestrator | 05:26:12.620 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.620421 | orchestrator | 05:26:12.620 STDOUT terraform:  + name = "testbed" 2025-09-29 05:26:12.620460 | orchestrator | 05:26:12.620 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.620496 | orchestrator | 05:26:12.620 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.620526 | orchestrator | 05:26:12.620 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-29 05:26:12.620551 | orchestrator | 05:26:12.620 STDOUT terraform:  } 2025-09-29 05:26:12.620623 | orchestrator | 05:26:12.620 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-29 05:26:12.620703 | orchestrator | 05:26:12.620 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-29 05:26:12.620730 | orchestrator | 05:26:12.620 STDOUT terraform:  + description = "ssh" 2025-09-29 05:26:12.620765 | orchestrator | 05:26:12.620 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.620792 | orchestrator | 05:26:12.620 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.620832 | orchestrator | 05:26:12.620 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.620847 | orchestrator | 05:26:12.620 STDOUT terraform:  + port_range_max = 22 2025-09-29 05:26:12.620878 | 
orchestrator | 05:26:12.620 STDOUT terraform:  + port_range_min = 22 2025-09-29 05:26:12.620909 | orchestrator | 05:26:12.620 STDOUT terraform:  + protocol = "tcp" 2025-09-29 05:26:12.620957 | orchestrator | 05:26:12.620 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.621003 | orchestrator | 05:26:12.620 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.621040 | orchestrator | 05:26:12.620 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.621064 | orchestrator | 05:26:12.621 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-29 05:26:12.621104 | orchestrator | 05:26:12.621 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-29 05:26:12.621141 | orchestrator | 05:26:12.621 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.621149 | orchestrator | 05:26:12.621 STDOUT terraform:  } 2025-09-29 05:26:12.621220 | orchestrator | 05:26:12.621 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-29 05:26:12.621269 | orchestrator | 05:26:12.621 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-29 05:26:12.621300 | orchestrator | 05:26:12.621 STDOUT terraform:  + description = "wireguard" 2025-09-29 05:26:12.621335 | orchestrator | 05:26:12.621 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.621376 | orchestrator | 05:26:12.621 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.621410 | orchestrator | 05:26:12.621 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.621435 | orchestrator | 05:26:12.621 STDOUT terraform:  + port_range_max = 51820 2025-09-29 05:26:12.621461 | orchestrator | 05:26:12.621 STDOUT terraform:  + port_range_min = 51820 2025-09-29 05:26:12.621485 | orchestrator | 05:26:12.621 STDOUT terraform:  + protocol = "udp" 2025-09-29 05:26:12.621522 | orchestrator | 
05:26:12.621 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.621564 | orchestrator | 05:26:12.621 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.621601 | orchestrator | 05:26:12.621 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.621661 | orchestrator | 05:26:12.621 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-29 05:26:12.621684 | orchestrator | 05:26:12.621 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-29 05:26:12.621731 | orchestrator | 05:26:12.621 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.621739 | orchestrator | 05:26:12.621 STDOUT terraform:  } 2025-09-29 05:26:12.621822 | orchestrator | 05:26:12.621 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-29 05:26:12.621882 | orchestrator | 05:26:12.621 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-29 05:26:12.621912 | orchestrator | 05:26:12.621 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.621940 | orchestrator | 05:26:12.621 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.621985 | orchestrator | 05:26:12.621 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.622010 | orchestrator | 05:26:12.621 STDOUT terraform:  + protocol = "tcp" 2025-09-29 05:26:12.626079 | orchestrator | 05:26:12.622 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.626116 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.626120 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.626124 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-29 05:26:12.626129 | orchestrator | 05:26:12.622 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-09-29 05:26:12.626133 | orchestrator | 05:26:12.622 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.626136 | orchestrator | 05:26:12.622 STDOUT terraform:  } 2025-09-29 05:26:12.626141 | orchestrator | 05:26:12.622 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-29 05:26:12.626145 | orchestrator | 05:26:12.622 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-29 05:26:12.626149 | orchestrator | 05:26:12.622 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.626153 | orchestrator | 05:26:12.622 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.626156 | orchestrator | 05:26:12.622 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.626160 | orchestrator | 05:26:12.622 STDOUT terraform:  + protocol = "udp" 2025-09-29 05:26:12.626164 | orchestrator | 05:26:12.622 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.626167 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.626171 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.626175 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-29 05:26:12.626179 | orchestrator | 05:26:12.622 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-29 05:26:12.626186 | orchestrator | 05:26:12.622 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.626190 | orchestrator | 05:26:12.622 STDOUT terraform:  } 2025-09-29 05:26:12.626194 | orchestrator | 05:26:12.622 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-29 05:26:12.626197 | orchestrator | 05:26:12.622 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" 
"security_group_management_rule5" { 2025-09-29 05:26:12.626201 | orchestrator | 05:26:12.622 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.626205 | orchestrator | 05:26:12.622 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.626209 | orchestrator | 05:26:12.622 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.626223 | orchestrator | 05:26:12.622 STDOUT terraform:  + protocol = "icmp" 2025-09-29 05:26:12.626227 | orchestrator | 05:26:12.622 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.626230 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.626234 | orchestrator | 05:26:12.622 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.626238 | orchestrator | 05:26:12.623 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-29 05:26:12.626242 | orchestrator | 05:26:12.623 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-29 05:26:12.626245 | orchestrator | 05:26:12.623 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.626249 | orchestrator | 05:26:12.623 STDOUT terraform:  } 2025-09-29 05:26:12.626253 | orchestrator | 05:26:12.623 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-29 05:26:12.626263 | orchestrator | 05:26:12.623 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-29 05:26:12.626267 | orchestrator | 05:26:12.623 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.626270 | orchestrator | 05:26:12.623 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.626274 | orchestrator | 05:26:12.623 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.626278 | orchestrator | 05:26:12.623 STDOUT terraform:  + protocol = "tcp" 2025-09-29 05:26:12.626281 | orchestrator | 05:26:12.623 STDOUT terraform:  + region = (known 
after apply) 2025-09-29 05:26:12.626285 | orchestrator | 05:26:12.623 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.626289 | orchestrator | 05:26:12.623 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.626293 | orchestrator | 05:26:12.623 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-29 05:26:12.626296 | orchestrator | 05:26:12.623 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-29 05:26:12.626300 | orchestrator | 05:26:12.623 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.626304 | orchestrator | 05:26:12.623 STDOUT terraform:  } 2025-09-29 05:26:12.626307 | orchestrator | 05:26:12.623 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-29 05:26:12.626311 | orchestrator | 05:26:12.623 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-29 05:26:12.626315 | orchestrator | 05:26:12.623 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.626318 | orchestrator | 05:26:12.623 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.626322 | orchestrator | 05:26:12.623 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.626326 | orchestrator | 05:26:12.623 STDOUT terraform:  + protocol = "udp" 2025-09-29 05:26:12.626330 | orchestrator | 05:26:12.623 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.626336 | orchestrator | 05:26:12.623 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.626345 | orchestrator | 05:26:12.623 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.626348 | orchestrator | 05:26:12.623 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-29 05:26:12.626352 | orchestrator | 05:26:12.623 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-29 05:26:12.626357 | orchestrator | 
05:26:12.623 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.626360 | orchestrator | 05:26:12.623 STDOUT terraform:  } 2025-09-29 05:26:12.626364 | orchestrator | 05:26:12.623 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-09-29 05:26:12.626368 | orchestrator | 05:26:12.623 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-09-29 05:26:12.626371 | orchestrator | 05:26:12.624 STDOUT terraform:  + direction = "ingress" 2025-09-29 05:26:12.626375 | orchestrator | 05:26:12.624 STDOUT terraform:  + ethertype = "IPv4" 2025-09-29 05:26:12.626380 | orchestrator | 05:26:12.624 STDOUT terraform:  + id = (known after apply) 2025-09-29 05:26:12.626383 | orchestrator | 05:26:12.624 STDOUT terraform:  + protocol = "icmp" 2025-09-29 05:26:12.626387 | orchestrator | 05:26:12.624 STDOUT terraform:  + region = (known after apply) 2025-09-29 05:26:12.626391 | orchestrator | 05:26:12.624 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-29 05:26:12.626394 | orchestrator | 05:26:12.624 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-29 05:26:12.626398 | orchestrator | 05:26:12.624 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-29 05:26:12.626402 | orchestrator | 05:26:12.624 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-29 05:26:12.626411 | orchestrator | 05:26:12.624 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-29 05:26:12.626414 | orchestrator | 05:26:12.624 STDOUT terraform:  } 2025-09-29 05:26:12.626418 | orchestrator | 05:26:12.624 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-09-29 05:26:12.626422 | orchestrator | 05:26:12.624 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-09-29 05:26:12.626426 | orchestrator | 05:26:12.624 STDOUT 
terraform:  + description = "vrrp"
2025-09-29 05:26:12.626430 | orchestrator | 05:26:12.624 STDOUT terraform:  + direction = "ingress"
2025-09-29 05:26:12.626433 | orchestrator | 05:26:12.624 STDOUT terraform:  + ethertype = "IPv4"
2025-09-29 05:26:12.626437 | orchestrator | 05:26:12.624 STDOUT terraform:  + id = (known after apply)
2025-09-29 05:26:12.626441 | orchestrator | 05:26:12.624 STDOUT terraform:  + protocol = "112"
2025-09-29 05:26:12.626444 | orchestrator | 05:26:12.624 STDOUT terraform:  + region = (known after apply)
2025-09-29 05:26:12.626448 | orchestrator | 05:26:12.624 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-29 05:26:12.626452 | orchestrator | 05:26:12.624 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-29 05:26:12.626460 | orchestrator | 05:26:12.624 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-29 05:26:12.626463 | orchestrator | 05:26:12.624 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-29 05:26:12.626467 | orchestrator | 05:26:12.624 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-29 05:26:12.626471 | orchestrator | 05:26:12.624 STDOUT terraform:  }
2025-09-29 05:26:12.626475 | orchestrator | 05:26:12.624 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-29 05:26:12.626478 | orchestrator | 05:26:12.624 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-29 05:26:12.626482 | orchestrator | 05:26:12.624 STDOUT terraform:  + all_tags = (known after apply)
2025-09-29 05:26:12.626486 | orchestrator | 05:26:12.624 STDOUT terraform:  + description = "management security group"
2025-09-29 05:26:12.626490 | orchestrator | 05:26:12.624 STDOUT terraform:  + id = (known after apply)
2025-09-29 05:26:12.626493 | orchestrator | 05:26:12.624 STDOUT terraform:  + name = "testbed-management"
2025-09-29 05:26:12.626497 | orchestrator | 05:26:12.624 STDOUT terraform:  + region = (known after apply)
2025-09-29 05:26:12.626501 | orchestrator | 05:26:12.624 STDOUT terraform:  + stateful = (known after apply)
2025-09-29 05:26:12.626505 | orchestrator | 05:26:12.624 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-29 05:26:12.626508 | orchestrator | 05:26:12.625 STDOUT terraform:  }
2025-09-29 05:26:12.626512 | orchestrator | 05:26:12.625 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-29 05:26:12.626516 | orchestrator | 05:26:12.625 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-29 05:26:12.626519 | orchestrator | 05:26:12.625 STDOUT terraform:  + all_tags = (known after apply)
2025-09-29 05:26:12.626523 | orchestrator | 05:26:12.625 STDOUT terraform:  + description = "node security group"
2025-09-29 05:26:12.626527 | orchestrator | 05:26:12.625 STDOUT terraform:  + id = (known after apply)
2025-09-29 05:26:12.626530 | orchestrator | 05:26:12.625 STDOUT terraform:  + name = "testbed-node"
2025-09-29 05:26:12.626535 | orchestrator | 05:26:12.625 STDOUT terraform:  + region = (known after apply)
2025-09-29 05:26:12.626538 | orchestrator | 05:26:12.625 STDOUT terraform:  + stateful = (known after apply)
2025-09-29 05:26:12.626542 | orchestrator | 05:26:12.625 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-29 05:26:12.626546 | orchestrator | 05:26:12.625 STDOUT terraform:  }
2025-09-29 05:26:12.626549 | orchestrator | 05:26:12.625 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-29 05:26:12.626557 | orchestrator | 05:26:12.625 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-29 05:26:12.626561 | orchestrator | 05:26:12.625 STDOUT terraform:  + all_tags = (known after apply)
2025-09-29 05:26:12.626565 | orchestrator | 05:26:12.625 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-29 05:26:12.626569 | orchestrator | 05:26:12.625 STDOUT terraform:  + dns_nameservers = [
2025-09-29 05:26:12.626575 | orchestrator | 05:26:12.625 STDOUT terraform:  + "8.8.8.8",
2025-09-29 05:26:12.626579 | orchestrator | 05:26:12.625 STDOUT terraform:  + "9.9.9.9",
2025-09-29 05:26:12.626583 | orchestrator | 05:26:12.625 STDOUT terraform:  ]
2025-09-29 05:26:12.626587 | orchestrator | 05:26:12.625 STDOUT terraform:  + enable_dhcp = true
2025-09-29 05:26:12.626590 | orchestrator | 05:26:12.625 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-29 05:26:12.626594 | orchestrator | 05:26:12.625 STDOUT terraform:  + id = (known after apply)
2025-09-29 05:26:12.626598 | orchestrator | 05:26:12.625 STDOUT terraform:  + ip_version = 4
2025-09-29 05:26:12.626601 | orchestrator | 05:26:12.625 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-29 05:26:12.626605 | orchestrator | 05:26:12.625 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-29 05:26:12.626609 | orchestrator | 05:26:12.625 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-29 05:26:12.626613 | orchestrator | 05:26:12.625 STDOUT terraform:  + network_id = (known after apply)
2025-09-29 05:26:12.626616 | orchestrator | 05:26:12.625 STDOUT terraform:  + no_gateway = false
2025-09-29 05:26:12.626620 | orchestrator | 05:26:12.625 STDOUT terraform:  + region = (known after apply)
2025-09-29 05:26:12.626654 | orchestrator | 05:26:12.625 STDOUT terraform:  + service_types = (known after apply)
2025-09-29 05:26:12.626659 | orchestrator | 05:26:12.625 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-29 05:26:12.626663 | orchestrator | 05:26:12.625 STDOUT terraform:  + allocation_pool {
2025-09-29 05:26:12.626667 | orchestrator | 05:26:12.625 STDOUT terraform:  + end = "192.168.31.250"
2025-09-29 05:26:12.626673 | orchestrator | 05:26:12.625 STDOUT terraform:  + start = "192.168.31.200"
2025-09-29 05:26:12.626677 | orchestrator | 05:26:12.625 STDOUT terraform:  }
2025-09-29 05:26:12.626681 | orchestrator | 05:26:12.625 STDOUT terraform:  }
2025-09-29 05:26:12.626685 | orchestrator | 05:26:12.625 STDOUT terraform:  # terraform_data.image will be created
2025-09-29 05:26:12.626688 | orchestrator | 05:26:12.625 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-29 05:26:12.626692 | orchestrator | 05:26:12.625 STDOUT terraform:  + id = (known after apply)
2025-09-29 05:26:12.626696 | orchestrator | 05:26:12.625 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-29 05:26:12.626700 | orchestrator | 05:26:12.625 STDOUT terraform:  + output = (known after apply)
2025-09-29 05:26:12.626703 | orchestrator | 05:26:12.626 STDOUT terraform:  }
2025-09-29 05:26:12.626707 | orchestrator | 05:26:12.626 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-29 05:26:12.626711 | orchestrator | 05:26:12.626 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-29 05:26:12.626714 | orchestrator | 05:26:12.626 STDOUT terraform:  + id = (known after apply)
2025-09-29 05:26:12.626718 | orchestrator | 05:26:12.626 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-29 05:26:12.626722 | orchestrator | 05:26:12.626 STDOUT terraform:  + output = (known after apply)
2025-09-29 05:26:12.626725 | orchestrator | 05:26:12.626 STDOUT terraform:  }
2025-09-29 05:26:12.626730 | orchestrator | 05:26:12.626 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-29 05:26:12.626738 | orchestrator | 05:26:12.626 STDOUT terraform: Changes to Outputs:
2025-09-29 05:26:12.626741 | orchestrator | 05:26:12.626 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-29 05:26:12.626745 | orchestrator | 05:26:12.626 STDOUT terraform:  + private_key = (sensitive value)
2025-09-29 05:26:12.770360 | orchestrator | 05:26:12.770 STDOUT terraform: terraform_data.image: Creating...
2025-09-29 05:26:12.770429 | orchestrator | 05:26:12.770 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=34be4e6d-bab3-5c1f-1234-3906efd1bd19]
2025-09-29 05:26:12.770436 | orchestrator | 05:26:12.770 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-29 05:26:12.771932 | orchestrator | 05:26:12.771 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=3558c30b-f066-f055-f604-2e332439408e]
2025-09-29 05:26:12.798241 | orchestrator | 05:26:12.798 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-29 05:26:12.803465 | orchestrator | 05:26:12.803 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-29 05:26:12.814199 | orchestrator | 05:26:12.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-29 05:26:12.814247 | orchestrator | 05:26:12.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-29 05:26:12.814255 | orchestrator | 05:26:12.814 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-29 05:26:12.814291 | orchestrator | 05:26:12.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-29 05:26:12.814344 | orchestrator | 05:26:12.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-29 05:26:12.814403 | orchestrator | 05:26:12.814 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-29 05:26:12.820065 | orchestrator | 05:26:12.819 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-29 05:26:12.834617 | orchestrator | 05:26:12.834 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-29 05:26:13.260463 | orchestrator | 05:26:13.260 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-29 05:26:13.265575 | orchestrator | 05:26:13.265 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-29 05:26:13.316849 | orchestrator | 05:26:13.316 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-09-29 05:26:13.323993 | orchestrator | 05:26:13.323 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-29 05:26:13.376102 | orchestrator | 05:26:13.375 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-29 05:26:13.381685 | orchestrator | 05:26:13.381 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-29 05:26:13.950956 | orchestrator | 05:26:13.950 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=d182f822-6f64-4362-a654-7127d31d0b1a]
2025-09-29 05:26:13.990204 | orchestrator | 05:26:13.957 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-29 05:26:16.464431 | orchestrator | 05:26:16.464 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=5f30f287-1956-4b14-b1b3-d656c5604e8f]
2025-09-29 05:26:16.486195 | orchestrator | 05:26:16.486 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=212523ac-09f9-4a75-841f-e4e8427949d1]
2025-09-29 05:26:16.499148 | orchestrator | 05:26:16.499 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-29 05:26:16.501001 | orchestrator | 05:26:16.500 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=a26f0dd0-3def-45cb-a526-391b85857c60]
2025-09-29 05:26:16.510120 | orchestrator | 05:26:16.506 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 1s [id=06f82a8b89729b8513cd95d26c4f2e7ceb43ee63]
2025-09-29 05:26:16.510167 | orchestrator | 05:26:16.507 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-29 05:26:16.512437 | orchestrator | 05:26:16.512 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-29 05:26:16.512496 | orchestrator | 05:26:16.512 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=96ce41040e8dd3e24a3d965c26c7e381928ecde0]
2025-09-29 05:26:16.518074 | orchestrator | 05:26:16.517 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=6f7dc170-46a8-451b-ba46-45ea4054a55a]
2025-09-29 05:26:16.519687 | orchestrator | 05:26:16.518 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-29 05:26:16.530692 | orchestrator | 05:26:16.530 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-29 05:26:16.530912 | orchestrator | 05:26:16.530 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=47886bdb-eb57-4895-bb6c-095bf009f1bc]
2025-09-29 05:26:16.539547 | orchestrator | 05:26:16.532 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=a19be117-9776-4997-9c5a-50a933b8c330]
2025-09-29 05:26:16.539594 | orchestrator | 05:26:16.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-29 05:26:16.549181 | orchestrator | 05:26:16.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-29 05:26:16.549306 | orchestrator | 05:26:16.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-29 05:26:16.583040 | orchestrator | 05:26:16.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=975b133b-dd90-41fb-addf-6e21202a98ee]
2025-09-29 05:26:16.587803 | orchestrator | 05:26:16.587 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-29 05:26:16.595077 | orchestrator | 05:26:16.594 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=9d6ffe74-7843-4b92-a660-34a8dc91d495]
2025-09-29 05:26:16.939054 | orchestrator | 05:26:16.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=a41b09bf-4033-4d86-9fc9-338370a7c5d5]
2025-09-29 05:26:17.295889 | orchestrator | 05:26:17.295 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=44392fa6-8e5a-4556-b3d8-faab9d48ed51]
2025-09-29 05:26:17.532987 | orchestrator | 05:26:17.532 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=48e7ba43-9325-4ea1-a759-9b76e8795e19]
2025-09-29 05:26:17.539969 | orchestrator | 05:26:17.539 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-29 05:26:19.966213 | orchestrator | 05:26:19.965 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=91d6b097-cf49-4bc5-9189-b5fe273ac0cf]
2025-09-29 05:26:19.996091 | orchestrator | 05:26:19.995 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=8cd16bf8-25b0-486d-8255-2bac14d23493]
2025-09-29 05:26:20.010404 | orchestrator | 05:26:20.009 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=3086d38e-d295-49b8-8314-7ddf42b6d254]
2025-09-29 05:26:20.010448 | orchestrator | 05:26:20.010 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=8960639b-518d-4917-8774-29b1873047c4]
2025-09-29 05:26:20.025870 | orchestrator | 05:26:20.025 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96]
2025-09-29 05:26:20.031221 | orchestrator | 05:26:20.030 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 2s [id=bc9c9512-c569-4608-8251-aff3192f7b23]
2025-09-29 05:26:20.038569 | orchestrator | 05:26:20.037 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-29 05:26:20.039859 | orchestrator | 05:26:20.039 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-29 05:26:20.039921 | orchestrator | 05:26:20.039 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-29 05:26:20.072474 | orchestrator | 05:26:20.071 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=dcb9eb34-30b6-467b-99a0-70fbe86f795a]
2025-09-29 05:26:20.254954 | orchestrator | 05:26:20.254 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=d98411fc-faa7-4481-aeca-783d7f1fb80b]
2025-09-29 05:26:20.281469 | orchestrator | 05:26:20.281 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-29 05:26:20.283126 | orchestrator | 05:26:20.283 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-29 05:26:20.284822 | orchestrator | 05:26:20.284 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-29 05:26:20.289704 | orchestrator | 05:26:20.289 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-29 05:26:20.291561 | orchestrator | 05:26:20.291 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-29 05:26:20.291591 | orchestrator | 05:26:20.291 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-29 05:26:20.632298 | orchestrator | 05:26:20.631 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=d04d4f5e-d55d-4744-b871-fcb1c09017d4]
2025-09-29 05:26:20.897058 | orchestrator | 05:26:20.896 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=4cc9ce66-554d-4a04-bcb7-85bec02b79b2]
2025-09-29 05:26:20.906927 | orchestrator | 05:26:20.906 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-29 05:26:20.908182 | orchestrator | 05:26:20.907 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-29 05:26:20.913457 | orchestrator | 05:26:20.913 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-29 05:26:20.915302 | orchestrator | 05:26:20.915 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=2aaf96c5-6f10-4f7f-9c57-e955974dc98a]
2025-09-29 05:26:20.915344 | orchestrator | 05:26:20.915 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-29 05:26:20.922721 | orchestrator | 05:26:20.922 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-29 05:26:21.174489 | orchestrator | 05:26:21.174 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=6565ebd7-0bad-4728-82c3-64ef0da55764]
2025-09-29 05:26:21.191202 | orchestrator | 05:26:21.190 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-29 05:26:21.232751 | orchestrator | 05:26:21.232 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=1a6c12da-e952-48c7-b0fd-37630e6c8717]
2025-09-29 05:26:21.244988 | orchestrator | 05:26:21.244 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-29 05:26:21.391703 | orchestrator | 05:26:21.391 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=21c579b9-3844-4963-920b-037bff5f616c]
2025-09-29 05:26:21.407214 | orchestrator | 05:26:21.406 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-29 05:26:21.426353 | orchestrator | 05:26:21.425 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=295b78e6-57f4-4595-9f53-056bdf4ae001]
2025-09-29 05:26:21.434899 | orchestrator | 05:26:21.434 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=85ad8370-4f8d-4313-84f4-9ebafb85ac00]
2025-09-29 05:26:21.441449 | orchestrator | 05:26:21.441 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-29 05:26:21.442242 | orchestrator | 05:26:21.442 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-29 05:26:21.593881 | orchestrator | 05:26:21.593 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 2s [id=bc9e5333-1676-4a47-8340-8b8df086185a]
2025-09-29 05:26:21.623153 | orchestrator | 05:26:21.622 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=d260b6ce-a683-4741-b452-e8d20fd9af58]
2025-09-29 05:26:21.809796 | orchestrator | 05:26:21.809 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=0c44667c-06fa-4288-963d-014c6f2837a3]
2025-09-29 05:26:21.908157 | orchestrator | 05:26:21.907 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=6dd62bde-481a-48f2-81b4-eb91bee3b1ae]
2025-09-29 05:26:21.990785 | orchestrator | 05:26:21.990 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=6f39fb26-5ac0-4b29-99cb-cd6dfecb42a8]
2025-09-29 05:26:22.126309 | orchestrator | 05:26:22.125 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=ef7d5621-3c2a-4d6d-9048-986fa7905f26]
2025-09-29 05:26:22.215579 | orchestrator | 05:26:22.215 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=edc3ad44-6dbe-42de-9bc0-6229599565bc]
2025-09-29 05:26:22.456295 | orchestrator | 05:26:22.455 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=d3dab759-1284-4317-9f0d-c714c6b6d53e]
2025-09-29 05:26:22.736120 | orchestrator | 05:26:22.735 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 2s [id=8b035d24-02a6-45f2-8d2a-ca132bc1a781]
2025-09-29 05:26:22.966944 | orchestrator | 05:26:22.966 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=fdc7030d-c62e-4186-ae81-1f254d38a7f2]
2025-09-29 05:26:22.985447 | orchestrator | 05:26:22.985 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-29 05:26:22.997106 | orchestrator | 05:26:22.996 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-29 05:26:22.999126 | orchestrator | 05:26:22.999 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-29 05:26:23.003730 | orchestrator | 05:26:23.003 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-29 05:26:23.004769 | orchestrator | 05:26:23.004 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-29 05:26:23.007837 | orchestrator | 05:26:23.007 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-29 05:26:23.031918 | orchestrator | 05:26:23.031 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-29 05:26:24.956494 | orchestrator | 05:26:24.956 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=0e7d388a-13b5-4f39-86c9-2b7f24584162]
2025-09-29 05:26:24.972785 | orchestrator | 05:26:24.972 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-29 05:26:24.977972 | orchestrator | 05:26:24.977 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-29 05:26:24.978044 | orchestrator | 05:26:24.977 STDOUT terraform: local_file.inventory: Creating...
2025-09-29 05:26:24.984800 | orchestrator | 05:26:24.984 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=b91fd14604e7cbff4658e6c2802388391487ce9c]
2025-09-29 05:26:24.985736 | orchestrator | 05:26:24.985 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=4ece67e5f9b5d7b0aadbeed36313139e691c686e]
2025-09-29 05:26:25.693692 | orchestrator | 05:26:25.693 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=0e7d388a-13b5-4f39-86c9-2b7f24584162]
2025-09-29 05:26:33.004664 | orchestrator | 05:26:33.004 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-29 05:26:33.004817 | orchestrator | 05:26:33.004 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-29 05:26:33.004836 | orchestrator | 05:26:33.004 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-29 05:26:33.005469 | orchestrator | 05:26:33.005 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-29 05:26:33.008706 | orchestrator | 05:26:33.008 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-29 05:26:33.033242 | orchestrator | 05:26:33.033 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-29 05:26:43.005158 | orchestrator | 05:26:43.004 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-29 05:26:43.006545 | orchestrator | 05:26:43.006 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-29 05:26:43.006858 | orchestrator | 05:26:43.006 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-29 05:26:43.006943 | orchestrator | 05:26:43.006 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-29 05:26:43.009260 | orchestrator | 05:26:43.009 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-29 05:26:43.034358 | orchestrator | 05:26:43.034 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-29 05:26:43.404890 | orchestrator | 05:26:43.404 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=dc8fd3c8-acd9-4b83-a8cd-e793dc06c87d]
2025-09-29 05:26:43.563402 | orchestrator | 05:26:43.563 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=953bcbd2-1344-44f4-8f1b-6c878d173c07]
2025-09-29 05:26:43.662260 | orchestrator | 05:26:43.661 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=25c3e87b-0f90-4d65-ae5c-7268453ed0a2]
2025-09-29 05:26:53.005576 | orchestrator | 05:26:53.005 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-29 05:26:53.005749 | orchestrator | 05:26:53.005 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-29 05:26:53.034973 | orchestrator | 05:26:53.034 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-29 05:26:53.771202 | orchestrator | 05:26:53.769 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=f65ca19d-deae-4cb0-8fe4-f98192afee54]
2025-09-29 05:26:53.986138 | orchestrator | 05:26:53.985 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=f27ce7a8-9b9e-42b5-b92e-aa107d93eeff]
2025-09-29 05:26:54.410795 | orchestrator | 05:26:54.407 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=2e38ed64-6f81-4658-926f-9ba48ad617d9]
2025-09-29 05:26:54.424467 | orchestrator | 05:26:54.424 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-29 05:26:54.436178 | orchestrator | 05:26:54.435 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8440771390501248608]
2025-09-29 05:26:54.447103 | orchestrator | 05:26:54.446 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-29 05:26:54.447243 | orchestrator | 05:26:54.447 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-29 05:26:54.451741 | orchestrator | 05:26:54.451 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-29 05:26:54.473099 | orchestrator | 05:26:54.472 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-29 05:26:54.476650 | orchestrator | 05:26:54.476 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-29 05:26:54.476906 | orchestrator | 05:26:54.476 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-29 05:26:54.480613 | orchestrator | 05:26:54.480 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-29 05:26:54.485003 | orchestrator | 05:26:54.484 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-29 05:26:54.487389 | orchestrator | 05:26:54.486 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-29 05:26:54.505955 | orchestrator | 05:26:54.505 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-29 05:26:57.880744 | orchestrator | 05:26:57.880 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=2e38ed64-6f81-4658-926f-9ba48ad617d9/6f7dc170-46a8-451b-ba46-45ea4054a55a]
2025-09-29 05:26:57.894366 | orchestrator | 05:26:57.893 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=dc8fd3c8-acd9-4b83-a8cd-e793dc06c87d/a26f0dd0-3def-45cb-a526-391b85857c60]
2025-09-29 05:26:57.915257 | orchestrator | 05:26:57.914 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=25c3e87b-0f90-4d65-ae5c-7268453ed0a2/a41b09bf-4033-4d86-9fc9-338370a7c5d5]
2025-09-29 05:26:58.031989 | orchestrator | 05:26:58.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=dc8fd3c8-acd9-4b83-a8cd-e793dc06c87d/975b133b-dd90-41fb-addf-6e21202a98ee]
2025-09-29 05:26:58.214591 | orchestrator | 05:26:58.214 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=25c3e87b-0f90-4d65-ae5c-7268453ed0a2/a19be117-9776-4997-9c5a-50a933b8c330]
2025-09-29 05:26:58.232677 | orchestrator | 05:26:58.232 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=2e38ed64-6f81-4658-926f-9ba48ad617d9/5f30f287-1956-4b14-b1b3-d656c5604e8f]
2025-09-29 05:27:04.336719 | orchestrator | 05:27:04.336 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=2e38ed64-6f81-4658-926f-9ba48ad617d9/47886bdb-eb57-4895-bb6c-095bf009f1bc]
2025-09-29 05:27:04.353878 | orchestrator | 05:27:04.353 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=25c3e87b-0f90-4d65-ae5c-7268453ed0a2/212523ac-09f9-4a75-841f-e4e8427949d1]
2025-09-29 05:27:04.377469 | orchestrator | 05:27:04.377 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=dc8fd3c8-acd9-4b83-a8cd-e793dc06c87d/9d6ffe74-7843-4b92-a660-34a8dc91d495]
2025-09-29 05:27:04.509018 | orchestrator | 05:27:04.508 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-29 05:27:14.509463 | orchestrator | 05:27:14.509 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-29 05:27:14.770447 | orchestrator | 05:27:14.770 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=44829d7e-0765-40ef-a6be-495daf615bba]
2025-09-29 05:27:14.786850 | orchestrator | 05:27:14.786 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-29 05:27:14.786943 | orchestrator | 05:27:14.786 STDOUT terraform: Outputs:
2025-09-29 05:27:14.786961 | orchestrator | 05:27:14.786 STDOUT terraform: manager_address =
2025-09-29 05:27:14.786973 | orchestrator | 05:27:14.786 STDOUT terraform: private_key =
2025-09-29 05:27:15.051200 | orchestrator | ok: Runtime: 0:01:06.610208
2025-09-29 05:27:15.090953 |
2025-09-29 05:27:15.091120 | TASK [Create infrastructure (stable)]
2025-09-29 05:27:15.626283 | orchestrator | skipping: Conditional result was False
2025-09-29 05:27:15.645215 |
2025-09-29 05:27:15.645385 | TASK [Fetch manager address]
2025-09-29 05:27:16.071560 | orchestrator | ok
2025-09-29 05:27:16.078604 |
2025-09-29 05:27:16.078709 | TASK [Set manager_host address]
2025-09-29 05:27:16.156620 | orchestrator | ok
2025-09-29 05:27:16.165885 |
2025-09-29 05:27:16.166004 | LOOP [Update ansible collections]
2025-09-29 05:27:17.037910 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-29 05:27:17.038286 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-29 05:27:17.038348 | orchestrator | Starting galaxy collection install process
2025-09-29 05:27:17.038409 | orchestrator | Process install dependency map
2025-09-29 05:27:17.038447 | orchestrator | Starting collection install process
2025-09-29 05:27:17.038481 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2025-09-29 05:27:17.038518 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2025-09-29 05:27:17.038556 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-29 05:27:17.038633 | orchestrator | ok: Item: commons Runtime: 0:00:00.550745
2025-09-29 05:27:17.912452 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-29 05:27:17.912613 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-29 05:27:17.912748 | orchestrator | Starting galaxy collection install process 2025-09-29 05:27:17.912788 | orchestrator | Process install dependency map 2025-09-29 05:27:17.912823 | orchestrator | Starting collection install process 2025-09-29 05:27:17.912856 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-09-29 05:27:17.912889 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-09-29 05:27:17.912920 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-29 05:27:17.912985 | orchestrator | ok: Item: services Runtime: 0:00:00.566356 2025-09-29 05:27:17.934228 | 2025-09-29 05:27:17.934375 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-29 05:27:28.488151 | orchestrator | ok 2025-09-29 05:27:28.501745 | 2025-09-29 05:27:28.501889 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-29 05:28:28.546222 | orchestrator | ok 2025-09-29 05:28:28.553728 | 2025-09-29 05:28:28.553831 | TASK [Fetch manager ssh hostkey] 2025-09-29 05:28:30.138067 | orchestrator | Output suppressed because no_log was given 2025-09-29 05:28:30.153219 | 2025-09-29 05:28:30.153386 | TASK [Get ssh keypair from terraform environment] 2025-09-29 05:28:30.691210 | orchestrator | ok: Runtime: 0:00:00.009689 2025-09-29 05:28:30.707853 | 2025-09-29 05:28:30.708008 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-29 05:28:30.745632 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-09-29 05:28:30.754862 | 2025-09-29 05:28:30.754979 | TASK [Run manager part 0] 2025-09-29 05:28:31.549319 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-29 05:28:31.608152 | orchestrator | 2025-09-29 05:28:31.608193 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-29 05:28:31.608200 | orchestrator | 2025-09-29 05:28:31.608214 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-29 05:28:34.532846 | orchestrator | ok: [testbed-manager] 2025-09-29 05:28:34.532903 | orchestrator | 2025-09-29 05:28:34.532931 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-29 05:28:34.532944 | orchestrator | 2025-09-29 05:28:34.532956 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-29 05:28:36.610744 | orchestrator | ok: [testbed-manager] 2025-09-29 05:28:36.610811 | orchestrator | 2025-09-29 05:28:36.610819 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-29 05:28:37.316837 | orchestrator | ok: [testbed-manager] 2025-09-29 05:28:37.316956 | orchestrator | 2025-09-29 05:28:37.316989 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-29 05:28:37.367605 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:28:37.367666 | orchestrator | 2025-09-29 05:28:37.367676 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-29 05:28:37.397016 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:28:37.397063 | orchestrator | 2025-09-29 05:28:37.397071 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-29 05:28:37.422778 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:28:37.422819 | 
orchestrator | 2025-09-29 05:28:37.422824 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-29 05:28:37.449355 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:28:37.449433 | orchestrator | 2025-09-29 05:28:37.449439 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-29 05:28:37.478639 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:28:37.478696 | orchestrator | 2025-09-29 05:28:37.478704 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-29 05:28:37.525212 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:28:37.525281 | orchestrator | 2025-09-29 05:28:37.525290 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-29 05:28:37.568186 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:28:37.568241 | orchestrator | 2025-09-29 05:28:37.568249 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-29 05:28:38.397873 | orchestrator | changed: [testbed-manager] 2025-09-29 05:28:38.397948 | orchestrator | 2025-09-29 05:28:38.397960 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-29 05:31:06.921406 | orchestrator | changed: [testbed-manager] 2025-09-29 05:31:06.921480 | orchestrator | 2025-09-29 05:31:06.921619 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-29 05:32:45.953349 | orchestrator | changed: [testbed-manager] 2025-09-29 05:32:45.953399 | orchestrator | 2025-09-29 05:32:45.953409 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-29 05:33:08.632443 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:08.632534 | orchestrator | 2025-09-29 05:33:08.632552 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-29 05:33:17.600792 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:17.600833 | orchestrator | 2025-09-29 05:33:17.600841 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-29 05:33:17.646783 | orchestrator | ok: [testbed-manager] 2025-09-29 05:33:17.646826 | orchestrator | 2025-09-29 05:33:17.646836 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-29 05:33:18.444216 | orchestrator | ok: [testbed-manager] 2025-09-29 05:33:18.444333 | orchestrator | 2025-09-29 05:33:18.444352 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-29 05:33:19.187021 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:19.187061 | orchestrator | 2025-09-29 05:33:19.187070 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-29 05:33:25.925332 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:25.925423 | orchestrator | 2025-09-29 05:33:25.925468 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-29 05:33:32.101251 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:32.101338 | orchestrator | 2025-09-29 05:33:32.101357 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-29 05:33:34.640381 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:34.640475 | orchestrator | 2025-09-29 05:33:34.640494 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-29 05:33:36.325673 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:36.325718 | orchestrator | 2025-09-29 05:33:36.325727 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-29 
05:33:37.421821 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-29 05:33:37.421892 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-29 05:33:37.421903 | orchestrator | 2025-09-29 05:33:37.421912 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-29 05:33:37.460616 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-29 05:33:37.460673 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-29 05:33:37.460683 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-29 05:33:37.460692 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-29 05:33:40.720328 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-29 05:33:40.720373 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-29 05:33:40.720380 | orchestrator | 2025-09-29 05:33:40.720386 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-29 05:33:41.350865 | orchestrator | changed: [testbed-manager] 2025-09-29 05:33:41.351025 | orchestrator | 2025-09-29 05:33:41.351035 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-29 05:35:00.661410 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-29 05:35:00.661532 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-29 05:35:00.661577 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-29 05:35:00.661591 | orchestrator | 2025-09-29 05:35:00.661604 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-29 05:35:02.873272 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-29 05:35:02.873366 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-29 05:35:02.873382 | orchestrator | 2025-09-29 05:35:02.873394 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-29 05:35:02.873406 | orchestrator | 2025-09-29 05:35:02.873418 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-29 05:35:04.232390 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:04.232452 | orchestrator | 2025-09-29 05:35:04.232470 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-29 05:35:04.281346 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:04.281399 | orchestrator | 2025-09-29 05:35:04.281407 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-29 05:35:04.343623 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:04.343668 | orchestrator | 2025-09-29 05:35:04.343675 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-29 05:35:05.084665 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:05.084733 | orchestrator | 2025-09-29 05:35:05.084751 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-29 05:35:05.832752 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:05.832845 | orchestrator | 2025-09-29 05:35:05.832861 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-29 05:35:07.286785 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-29 05:35:07.286816 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-29 05:35:07.286823 | orchestrator | 2025-09-29 05:35:07.286834 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-29 05:35:08.697353 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:08.697513 | orchestrator | 2025-09-29 05:35:08.697531 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-29 05:35:10.454854 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-29 05:35:10.454902 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-29 05:35:10.454910 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-29 05:35:10.454916 | orchestrator | 2025-09-29 05:35:10.454924 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-29 05:35:10.506148 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:10.506219 | orchestrator | 2025-09-29 05:35:10.506235 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-29 05:35:11.098839 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:11.098916 | orchestrator | 2025-09-29 05:35:11.098934 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-29 05:35:11.160930 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:11.160983 | orchestrator | 2025-09-29 05:35:11.160996 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-29 05:35:12.002597 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-29 05:35:12.002667 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:12.002682 | orchestrator | 2025-09-29 05:35:12.002695 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-29 05:35:12.040005 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:12.040067 | orchestrator | 2025-09-29 05:35:12.040083 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-29 05:35:12.072779 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:12.072839 | orchestrator | 2025-09-29 05:35:12.072853 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-29 05:35:12.108027 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:12.108088 | orchestrator | 2025-09-29 05:35:12.108111 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-29 05:35:12.168467 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:12.168528 | orchestrator | 2025-09-29 05:35:12.168565 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-29 05:35:12.864192 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:12.864217 | orchestrator | 2025-09-29 05:35:12.864222 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-29 05:35:12.864227 | orchestrator | 2025-09-29 05:35:12.864231 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-29 05:35:14.279917 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:14.279945 | orchestrator | 2025-09-29 05:35:14.279950 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-29 05:35:15.291753 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:15.291787 | orchestrator | 2025-09-29 05:35:15.291793 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:35:15.291799 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-29 05:35:15.291803 | orchestrator | 2025-09-29 05:35:15.519665 | orchestrator | ok: Runtime: 0:06:44.366262 2025-09-29 05:35:15.536099 | 2025-09-29 05:35:15.536231 | TASK [Point 
out that the log in on the manager is now possible] 2025-09-29 05:35:15.567910 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-29 05:35:15.575172 | 2025-09-29 05:35:15.575889 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-29 05:35:15.607639 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-29 05:35:15.616617 | 2025-09-29 05:35:15.616734 | TASK [Run manager part 1 + 2] 2025-09-29 05:35:16.424758 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-29 05:35:16.476438 | orchestrator | 2025-09-29 05:35:16.476485 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-29 05:35:16.476493 | orchestrator | 2025-09-29 05:35:16.476505 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-29 05:35:19.418321 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:19.418372 | orchestrator | 2025-09-29 05:35:19.418394 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-29 05:35:19.457134 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:19.457178 | orchestrator | 2025-09-29 05:35:19.457187 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-29 05:35:19.496100 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:19.496148 | orchestrator | 2025-09-29 05:35:19.496157 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-29 05:35:19.529719 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:19.529767 | orchestrator | 2025-09-29 05:35:19.529776 | orchestrator | TASK [osism.commons.repository : Set repository_default fact 
to default value] *** 2025-09-29 05:35:19.593271 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:19.593321 | orchestrator | 2025-09-29 05:35:19.593329 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-29 05:35:19.651296 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:19.651343 | orchestrator | 2025-09-29 05:35:19.651352 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-29 05:35:19.692188 | orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-29 05:35:19.692230 | orchestrator | 2025-09-29 05:35:19.692236 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-29 05:35:20.397871 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:20.397922 | orchestrator | 2025-09-29 05:35:20.397931 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-29 05:35:20.449972 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:20.450039 | orchestrator | 2025-09-29 05:35:20.450048 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-29 05:35:21.815589 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:21.815635 | orchestrator | 2025-09-29 05:35:21.815645 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-29 05:35:22.410615 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:22.410660 | orchestrator | 2025-09-29 05:35:22.410669 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-29 05:35:23.536374 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:23.536438 | orchestrator | 2025-09-29 05:35:23.536448 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-29 05:35:40.610953 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:40.611035 | orchestrator | 2025-09-29 05:35:40.611050 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-29 05:35:41.275576 | orchestrator | ok: [testbed-manager] 2025-09-29 05:35:41.275656 | orchestrator | 2025-09-29 05:35:41.275672 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-29 05:35:41.325519 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:41.325614 | orchestrator | 2025-09-29 05:35:41.325629 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-29 05:35:42.219497 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:42.219603 | orchestrator | 2025-09-29 05:35:42.219617 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-29 05:35:43.112072 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:43.112111 | orchestrator | 2025-09-29 05:35:43.112119 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-29 05:35:43.643661 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:43.643742 | orchestrator | 2025-09-29 05:35:43.643758 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-29 05:35:43.679801 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-29 05:35:43.679858 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-29 05:35:43.679864 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-29 05:35:43.679869 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-29 05:35:45.561205 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:45.561367 | orchestrator | 2025-09-29 05:35:45.561380 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-29 05:35:53.855958 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-29 05:35:53.856067 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-29 05:35:53.856096 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-29 05:35:53.856116 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-29 05:35:53.856144 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-29 05:35:53.856164 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-29 05:35:53.856182 | orchestrator | 2025-09-29 05:35:53.856203 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-29 05:35:54.779728 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:54.779801 | orchestrator | 2025-09-29 05:35:54.779813 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-29 05:35:54.821578 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:54.821632 | orchestrator | 2025-09-29 05:35:54.821640 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-29 05:35:57.985681 | orchestrator | changed: [testbed-manager] 2025-09-29 05:35:57.985721 | orchestrator | 2025-09-29 05:35:57.985730 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-29 05:35:58.025648 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:35:58.025687 | orchestrator | 2025-09-29 05:35:58.025696 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-29 05:37:29.918097 | orchestrator | changed: [testbed-manager] 2025-09-29 
05:37:29.918201 | orchestrator | 2025-09-29 05:37:29.918220 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-29 05:37:31.097901 | orchestrator | ok: [testbed-manager] 2025-09-29 05:37:31.097988 | orchestrator | 2025-09-29 05:37:31.098006 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:37:31.098044 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-29 05:37:31.098058 | orchestrator | 2025-09-29 05:37:31.248238 | orchestrator | ok: Runtime: 0:02:15.268197 2025-09-29 05:37:31.260367 | 2025-09-29 05:37:31.260506 | TASK [Reboot manager] 2025-09-29 05:37:32.794074 | orchestrator | ok: Runtime: 0:00:00.968783 2025-09-29 05:37:32.811988 | 2025-09-29 05:37:32.812144 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-29 05:37:47.271969 | orchestrator | ok 2025-09-29 05:37:47.287815 | 2025-09-29 05:37:47.288074 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-29 05:38:47.334648 | orchestrator | ok 2025-09-29 05:38:47.344319 | 2025-09-29 05:38:47.344444 | TASK [Deploy manager + bootstrap nodes] 2025-09-29 05:38:49.908001 | orchestrator | 2025-09-29 05:38:49.908204 | orchestrator | # DEPLOY MANAGER 2025-09-29 05:38:49.908232 | orchestrator | 2025-09-29 05:38:49.908247 | orchestrator | + set -e 2025-09-29 05:38:49.908261 | orchestrator | + echo 2025-09-29 05:38:49.908276 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-29 05:38:49.908293 | orchestrator | + echo 2025-09-29 05:38:49.908345 | orchestrator | + cat /opt/manager-vars.sh 2025-09-29 05:38:49.911206 | orchestrator | export NUMBER_OF_NODES=6 2025-09-29 05:38:49.911301 | orchestrator | 2025-09-29 05:38:49.911320 | orchestrator | export CEPH_VERSION=reef 2025-09-29 05:38:49.911334 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-29 05:38:49.911347 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-29 05:38:49.911370 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-29 05:38:49.911381 | orchestrator | 2025-09-29 05:38:49.911400 | orchestrator | export ARA=false 2025-09-29 05:38:49.911412 | orchestrator | export DEPLOY_MODE=manager 2025-09-29 05:38:49.911430 | orchestrator | export TEMPEST=false 2025-09-29 05:38:49.911441 | orchestrator | export IS_ZUUL=true 2025-09-29 05:38:49.911488 | orchestrator | 2025-09-29 05:38:49.911509 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 05:38:49.911521 | orchestrator | export EXTERNAL_API=false 2025-09-29 05:38:49.911532 | orchestrator | 2025-09-29 05:38:49.911543 | orchestrator | export IMAGE_USER=ubuntu 2025-09-29 05:38:49.911558 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-29 05:38:49.911569 | orchestrator | 2025-09-29 05:38:49.911580 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-29 05:38:49.911598 | orchestrator | 2025-09-29 05:38:49.911609 | orchestrator | + echo 2025-09-29 05:38:49.911622 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-29 05:38:49.912181 | orchestrator | ++ export INTERACTIVE=false 2025-09-29 05:38:49.912201 | orchestrator | ++ INTERACTIVE=false 2025-09-29 05:38:49.912213 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-29 05:38:49.912225 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-29 05:38:49.912320 | orchestrator | + source /opt/manager-vars.sh 2025-09-29 05:38:49.912359 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-29 05:38:49.912371 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-29 05:38:49.912390 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-29 05:38:49.912401 | orchestrator | ++ CEPH_VERSION=reef 2025-09-29 05:38:49.912411 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-29 05:38:49.912483 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-29 05:38:49.912501 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-29 05:38:49.912512 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-09-29 05:38:49.912530 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-29 05:38:49.912552 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-29 05:38:49.912563 | orchestrator | ++ export ARA=false 2025-09-29 05:38:49.912574 | orchestrator | ++ ARA=false 2025-09-29 05:38:49.912585 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-29 05:38:49.912596 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-29 05:38:49.912607 | orchestrator | ++ export TEMPEST=false 2025-09-29 05:38:49.912617 | orchestrator | ++ TEMPEST=false 2025-09-29 05:38:49.912628 | orchestrator | ++ export IS_ZUUL=true 2025-09-29 05:38:49.912639 | orchestrator | ++ IS_ZUUL=true 2025-09-29 05:38:49.912653 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 05:38:49.912665 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 05:38:49.912676 | orchestrator | ++ export EXTERNAL_API=false 2025-09-29 05:38:49.912686 | orchestrator | ++ EXTERNAL_API=false 2025-09-29 05:38:49.912697 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-29 05:38:49.912708 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-29 05:38:49.912719 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-29 05:38:49.912729 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-29 05:38:49.912740 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-29 05:38:49.912751 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-29 05:38:49.912762 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-09-29 05:38:49.969559 | orchestrator | + docker version 2025-09-29 05:38:50.252105 | orchestrator | Client: Docker Engine - Community 2025-09-29 05:38:50.252208 | orchestrator | Version: 27.5.1 2025-09-29 05:38:50.252226 | orchestrator | API version: 1.47 2025-09-29 05:38:50.252238 | orchestrator | Go version: go1.22.11 2025-09-29 05:38:50.252248 | orchestrator | Git commit: 9f9e405 2025-09-29 05:38:50.252259 
| orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-29 05:38:50.252271 | orchestrator | OS/Arch: linux/amd64
2025-09-29 05:38:50.252282 | orchestrator | Context: default
2025-09-29 05:38:50.252293 | orchestrator |
2025-09-29 05:38:50.252304 | orchestrator | Server: Docker Engine - Community
2025-09-29 05:38:50.252316 | orchestrator | Engine:
2025-09-29 05:38:50.252327 | orchestrator | Version: 27.5.1
2025-09-29 05:38:50.252338 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-29 05:38:50.252379 | orchestrator | Go version: go1.22.11
2025-09-29 05:38:50.252391 | orchestrator | Git commit: 4c9b3b0
2025-09-29 05:38:50.252401 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-29 05:38:50.252412 | orchestrator | OS/Arch: linux/amd64
2025-09-29 05:38:50.252423 | orchestrator | Experimental: false
2025-09-29 05:38:50.252433 | orchestrator | containerd:
2025-09-29 05:38:50.252499 | orchestrator | Version: v1.7.28
2025-09-29 05:38:50.252513 | orchestrator | GitCommit: b98a3aace656320842a23f4a392a33f46af97866
2025-09-29 05:38:50.252524 | orchestrator | runc:
2025-09-29 05:38:50.252535 | orchestrator | Version: 1.3.0
2025-09-29 05:38:50.252546 | orchestrator | GitCommit: v1.3.0-0-g4ca628d1
2025-09-29 05:38:50.252558 | orchestrator | docker-init:
2025-09-29 05:38:50.252568 | orchestrator | Version: 0.19.0
2025-09-29 05:38:50.252580 | orchestrator | GitCommit: de40ad0
2025-09-29 05:38:50.256276 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-29 05:38:50.267183 | orchestrator | + set -e
2025-09-29 05:38:50.267221 | orchestrator | + source /opt/manager-vars.sh
2025-09-29 05:38:50.267233 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-29 05:38:50.267283 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-29 05:38:50.267295 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-29 05:38:50.267306 | orchestrator | ++ CEPH_VERSION=reef
2025-09-29 05:38:50.267322 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-29 05:38:50.267420 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-29 05:38:50.267496 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-29 05:38:50.267510 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-29 05:38:50.267555 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-29 05:38:50.267598 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-29 05:38:50.267609 | orchestrator | ++ export ARA=false
2025-09-29 05:38:50.267620 | orchestrator | ++ ARA=false
2025-09-29 05:38:50.267631 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-29 05:38:50.267649 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-29 05:38:50.267690 | orchestrator | ++ export TEMPEST=false
2025-09-29 05:38:50.267732 | orchestrator | ++ TEMPEST=false
2025-09-29 05:38:50.267744 | orchestrator | ++ export IS_ZUUL=true
2025-09-29 05:38:50.267755 | orchestrator | ++ IS_ZUUL=true
2025-09-29 05:38:50.267766 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20
2025-09-29 05:38:50.267777 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20
2025-09-29 05:38:50.267788 | orchestrator | ++ export EXTERNAL_API=false
2025-09-29 05:38:50.267799 | orchestrator | ++ EXTERNAL_API=false
2025-09-29 05:38:50.267816 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-29 05:38:50.267827 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-29 05:38:50.267838 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-29 05:38:50.267849 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-29 05:38:50.267860 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-29 05:38:50.267871 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-29 05:38:50.267886 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-29 05:38:50.267897 | orchestrator | ++ export INTERACTIVE=false
2025-09-29 05:38:50.267907 | orchestrator | ++ INTERACTIVE=false
2025-09-29 05:38:50.267918 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-29 05:38:50.267933 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-29 05:38:50.268146 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-29 05:38:50.268165 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-29 05:38:50.268177 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-09-29 05:38:50.275416 | orchestrator | + set -e
2025-09-29 05:38:50.275441 | orchestrator | + VERSION=reef
2025-09-29 05:38:50.276522 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-29 05:38:50.282527 | orchestrator | + [[ -n ceph_version: reef ]]
2025-09-29 05:38:50.282552 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-09-29 05:38:50.289424 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-09-29 05:38:50.295855 | orchestrator | + set -e
2025-09-29 05:38:50.296290 | orchestrator | + VERSION=2024.2
2025-09-29 05:38:50.297120 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-29 05:38:50.302007 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-09-29 05:38:50.302142 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-09-29 05:38:50.307250 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-29 05:38:50.308504 | orchestrator | ++ semver latest 7.0.0
2025-09-29 05:38:50.375680 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-29 05:38:50.375768 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-29 05:38:50.375783 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-29 05:38:50.375796 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-29 05:38:50.471048 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-29 05:38:50.472131 | orchestrator | + source /opt/venv/bin/activate
2025-09-29 05:38:50.473427 | orchestrator | ++ deactivate nondestructive
2025-09-29 05:38:50.473445 | orchestrator | ++ '[' -n '' ']'
2025-09-29 05:38:50.473479 | orchestrator | ++ '[' -n '' ']'
2025-09-29 05:38:50.473490 | orchestrator | ++ hash -r
2025-09-29 05:38:50.473618 | orchestrator | ++ '[' -n '' ']'
2025-09-29 05:38:50.473633 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-29 05:38:50.473644 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-29 05:38:50.473655 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-29 05:38:50.473948 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-29 05:38:50.473965 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-29 05:38:50.473976 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-29 05:38:50.473987 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-29 05:38:50.473999 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-29 05:38:50.474011 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-29 05:38:50.474076 | orchestrator | ++ export PATH
2025-09-29 05:38:50.474092 | orchestrator | ++ '[' -n '' ']'
2025-09-29 05:38:50.474103 | orchestrator | ++ '[' -z '' ']'
2025-09-29 05:38:50.474170 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-29 05:38:50.474184 | orchestrator | ++ PS1='(venv) '
2025-09-29 05:38:50.474195 | orchestrator | ++ export PS1
2025-09-29 05:38:50.474206 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-29 05:38:50.474301 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-29 05:38:50.474371 | orchestrator | ++ hash -r
2025-09-29 05:38:50.474573 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-29 05:38:51.850155 | orchestrator |
2025-09-29 05:38:51.850260 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-09-29 05:38:51.850277 | orchestrator |
2025-09-29 05:38:51.850290 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-29 05:38:52.434722 | orchestrator | ok: [testbed-manager]
2025-09-29 05:38:52.434821 | orchestrator |
2025-09-29 05:38:52.434836 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-29 05:38:53.454002 | orchestrator | changed: [testbed-manager]
2025-09-29 05:38:53.454119 | orchestrator |
2025-09-29 05:38:53.454128 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-29 05:38:53.454134 | orchestrator |
2025-09-29 05:38:53.454139 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-29 05:38:55.716790 | orchestrator | ok: [testbed-manager]
2025-09-29 05:38:55.716899 | orchestrator |
2025-09-29 05:38:55.716914 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-29 05:38:55.766211 | orchestrator | ok: [testbed-manager]
2025-09-29 05:38:55.766239 | orchestrator |
2025-09-29 05:38:55.766253 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-29 05:38:56.181953 | orchestrator | changed: [testbed-manager]
2025-09-29 05:38:56.182093 | orchestrator |
2025-09-29 05:38:56.182110 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-29 05:38:56.209545 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:38:56.209571 | orchestrator |
2025-09-29 05:38:56.209583 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-29 05:38:56.521843 | orchestrator | changed: [testbed-manager]
2025-09-29 05:38:56.521930 | orchestrator |
2025-09-29 05:38:56.521944 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-29 05:38:56.569639 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:38:56.569688 | orchestrator |
2025-09-29 05:38:56.569700 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-29 05:38:56.873710 | orchestrator | ok: [testbed-manager]
2025-09-29 05:38:56.873817 | orchestrator |
2025-09-29 05:38:56.873835 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-29 05:38:56.979889 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:38:56.979975 | orchestrator |
2025-09-29 05:38:56.979989 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-29 05:38:56.980001 | orchestrator |
2025-09-29 05:38:56.980017 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-29 05:38:58.572255 | orchestrator | ok: [testbed-manager]
2025-09-29 05:38:58.572352 | orchestrator |
2025-09-29 05:38:58.572367 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-29 05:38:58.670593 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-29 05:38:58.670622 | orchestrator |
2025-09-29 05:38:58.670633 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-29 05:38:58.738943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-29 05:38:58.738965 | orchestrator |
2025-09-29 05:38:58.738975 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-29 05:38:59.747257 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-29 05:38:59.747352 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-29 05:38:59.747367 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-29 05:38:59.747379 | orchestrator |
2025-09-29 05:38:59.747392 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-29 05:39:01.432144 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-29 05:39:01.432263 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-29 05:39:01.432281 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-29 05:39:01.432310 | orchestrator |
2025-09-29 05:39:01.433050 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-29 05:39:02.023015 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-29 05:39:02.023111 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:02.023127 | orchestrator |
2025-09-29 05:39:02.023140 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-29 05:39:02.616660 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-29 05:39:02.616756 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:02.616772 | orchestrator |
2025-09-29 05:39:02.616784 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-29 05:39:02.665221 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:39:02.665306 | orchestrator |
2025-09-29 05:39:02.665320 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-29 05:39:02.992632 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:02.992739 | orchestrator |
2025-09-29 05:39:02.992756 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-29 05:39:03.047749 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-29 05:39:03.047848 | orchestrator |
2025-09-29 05:39:03.047865 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-29 05:39:03.981094 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:03.981184 | orchestrator |
2025-09-29 05:39:03.981196 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-29 05:39:04.743173 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:04.743272 | orchestrator |
2025-09-29 05:39:04.743288 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-29 05:39:16.839065 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:16.839173 | orchestrator |
2025-09-29 05:39:16.839189 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-29 05:39:16.880725 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:39:16.880748 | orchestrator |
2025-09-29 05:39:16.880760 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-29 05:39:16.880772 | orchestrator |
2025-09-29 05:39:16.880783 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-29 05:39:18.645808 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:18.645910 | orchestrator |
2025-09-29 05:39:18.645953 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-29 05:39:18.760971 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-29 05:39:18.761089 | orchestrator |
2025-09-29 05:39:18.761105 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-29 05:39:18.812100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-29 05:39:18.812174 | orchestrator |
2025-09-29 05:39:18.812190 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-29 05:39:21.333802 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:21.333904 | orchestrator |
2025-09-29 05:39:21.333919 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-29 05:39:21.391839 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:21.391904 | orchestrator |
2025-09-29 05:39:21.391921 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-29 05:39:21.539008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-29 05:39:21.539075 | orchestrator |
2025-09-29 05:39:21.539089 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-29 05:39:24.418589 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-29 05:39:24.418694 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-29 05:39:24.418709 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-29 05:39:24.418721 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-29 05:39:24.418732 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-29 05:39:24.418743 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-29 05:39:24.418754 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-29 05:39:24.418765 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-29 05:39:24.418777 | orchestrator |
2025-09-29 05:39:24.418790 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-29 05:39:25.054612 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:25.054718 | orchestrator |
2025-09-29 05:39:25.054737 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-29 05:39:25.707365 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:25.707508 | orchestrator |
2025-09-29 05:39:25.707525 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-29 05:39:25.778961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-29 05:39:25.779045 | orchestrator |
2025-09-29 05:39:25.779060 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-29 05:39:27.027314 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-29 05:39:27.027462 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-29 05:39:27.027481 | orchestrator |
2025-09-29 05:39:27.027494 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-29 05:39:27.695300 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:27.695399 | orchestrator |
2025-09-29 05:39:27.695470 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-29 05:39:27.752839 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:39:27.752930 | orchestrator |
2025-09-29 05:39:27.752947 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-29 05:39:27.834814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-29 05:39:27.834887 | orchestrator |
2025-09-29 05:39:27.834901 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-29 05:39:28.471290 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:28.471385 | orchestrator |
2025-09-29 05:39:28.471401 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-29 05:39:28.532585 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-29 05:39:28.532687 | orchestrator |
2025-09-29 05:39:28.532702 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-29 05:39:29.950726 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-29 05:39:29.950828 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-29 05:39:29.950843 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:29.950857 | orchestrator |
2025-09-29 05:39:29.950869 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-29 05:39:30.588752 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:30.588845 | orchestrator |
2025-09-29 05:39:30.588859 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-29 05:39:30.642220 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:39:30.642286 | orchestrator |
2025-09-29 05:39:30.642299 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-29 05:39:30.732552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-29 05:39:30.732613 | orchestrator |
2025-09-29 05:39:30.732627 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-29 05:39:31.316928 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:31.317020 | orchestrator |
2025-09-29 05:39:31.317034 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-29 05:39:31.739708 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:31.739798 | orchestrator |
2025-09-29 05:39:31.739812 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-29 05:39:32.917888 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-29 05:39:32.917979 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-29 05:39:32.917992 | orchestrator |
2025-09-29 05:39:32.918005 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-29 05:39:33.495381 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:33.495521 | orchestrator |
2025-09-29 05:39:33.495538 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-29 05:39:33.867453 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:33.867550 | orchestrator |
2025-09-29 05:39:33.867565 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-29 05:39:34.201472 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:34.201571 | orchestrator |
2025-09-29 05:39:34.201588 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-29 05:39:34.256826 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:39:34.256910 | orchestrator |
2025-09-29 05:39:34.256924 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-29 05:39:34.334697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-29 05:39:34.334786 | orchestrator |
2025-09-29 05:39:34.334802 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-29 05:39:34.385755 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:34.385826 | orchestrator |
2025-09-29 05:39:34.385840 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-29 05:39:36.274529 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-29 05:39:36.274634 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-29 05:39:36.274665 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-29 05:39:36.274688 | orchestrator |
2025-09-29 05:39:36.274701 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-29 05:39:36.926009 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:36.926161 | orchestrator |
2025-09-29 05:39:36.926179 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-29 05:39:37.669769 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:37.669871 | orchestrator |
2025-09-29 05:39:37.669888 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-29 05:39:38.405357 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:38.405499 | orchestrator |
2025-09-29 05:39:38.405516 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-29 05:39:38.480936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-29 05:39:38.481000 | orchestrator |
2025-09-29 05:39:38.481013 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-29 05:39:38.534295 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:38.534369 | orchestrator |
2025-09-29 05:39:38.534384 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-29 05:39:39.264932 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-29 05:39:39.265031 | orchestrator |
2025-09-29 05:39:39.265047 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-29 05:39:39.361043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-29 05:39:39.361137 | orchestrator |
2025-09-29 05:39:39.361151 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-29 05:39:40.082941 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:40.083036 | orchestrator |
2025-09-29 05:39:40.083051 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-29 05:39:40.677505 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:40.677601 | orchestrator |
2025-09-29 05:39:40.677616 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-29 05:39:40.727883 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:39:40.727947 | orchestrator |
2025-09-29 05:39:40.727961 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-29 05:39:40.785446 | orchestrator | ok: [testbed-manager]
2025-09-29 05:39:40.785511 | orchestrator |
2025-09-29 05:39:40.785524 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-29 05:39:41.602984 | orchestrator | changed: [testbed-manager]
2025-09-29 05:39:41.603080 | orchestrator |
2025-09-29 05:39:41.603095 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-29 05:40:48.311511 | orchestrator | changed: [testbed-manager]
2025-09-29 05:40:48.311632 | orchestrator |
2025-09-29 05:40:48.311650 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-29 05:40:49.204403 | orchestrator | ok: [testbed-manager]
2025-09-29 05:40:49.204494 | orchestrator |
2025-09-29 05:40:49.204508 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-29 05:40:49.281022 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:40:49.281087 | orchestrator |
2025-09-29 05:40:49.281101 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-29 05:40:51.557995 | orchestrator | changed: [testbed-manager]
2025-09-29 05:40:51.558176 | orchestrator |
2025-09-29 05:40:51.558201 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-29 05:40:51.610163 | orchestrator | ok: [testbed-manager]
2025-09-29 05:40:51.610217 | orchestrator |
2025-09-29 05:40:51.610231 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-29 05:40:51.610243 | orchestrator |
2025-09-29 05:40:51.610254 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-29 05:40:51.651598 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:40:51.651645 | orchestrator |
2025-09-29 05:40:51.651657 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-29 05:41:51.702425 | orchestrator | Pausing for 60 seconds
2025-09-29 05:41:51.702530 | orchestrator | changed: [testbed-manager]
2025-09-29 05:41:51.702547 | orchestrator |
2025-09-29 05:41:51.702560 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-29 05:41:56.233115 | orchestrator | changed: [testbed-manager]
2025-09-29 05:41:56.233201 | orchestrator |
2025-09-29 05:41:56.233211 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-29 05:42:37.787487 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-29 05:42:37.787605 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-29 05:42:37.787620 | orchestrator | changed: [testbed-manager]
2025-09-29 05:42:37.787663 | orchestrator |
2025-09-29 05:42:37.787677 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-29 05:42:46.965097 | orchestrator | changed: [testbed-manager]
2025-09-29 05:42:46.965209 | orchestrator |
2025-09-29 05:42:46.965275 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-29 05:42:47.059889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-29 05:42:47.059947 | orchestrator |
2025-09-29 05:42:47.059961 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-29 05:42:47.059973 | orchestrator |
2025-09-29 05:42:47.059985 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-29 05:42:47.110950 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:42:47.111028 | orchestrator |
2025-09-29 05:42:47.111048 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2025-09-29 05:42:47.191170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2025-09-29 05:42:47.191261 | orchestrator |
2025-09-29 05:42:47.191272 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2025-09-29 05:42:48.021329 | orchestrator | changed: [testbed-manager]
2025-09-29 05:42:48.021455 | orchestrator |
2025-09-29 05:42:48.021474 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2025-09-29 05:42:51.966493 | orchestrator | ok: [testbed-manager]
2025-09-29 05:42:51.966594 | orchestrator |
2025-09-29 05:42:51.966610 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2025-09-29 05:42:52.041669 | orchestrator | ok: [testbed-manager] => {
2025-09-29 05:42:52.041750 | orchestrator | "version_check_result.stdout_lines": [
2025-09-29 05:42:52.041765 | orchestrator | "=== OSISM Container Version Check ===",
2025-09-29 05:42:52.041777 | orchestrator | "Checking running containers against expected versions...",
2025-09-29 05:42:52.041789 | orchestrator | "",
2025-09-29 05:42:52.041801 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2025-09-29 05:42:52.041812 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2025-09-29 05:42:52.041823 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.041834 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest",
2025-09-29 05:42:52.041845 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.041856 | orchestrator | "",
2025-09-29 05:42:52.041867 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2025-09-29 05:42:52.041878 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest",
2025-09-29 05:42:52.041889 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.041900 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest",
2025-09-29 05:42:52.041910 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.041921 | orchestrator | "",
2025-09-29 05:42:52.041932 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2025-09-29 05:42:52.041943 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2025-09-29 05:42:52.041954 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.041964 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest",
2025-09-29 05:42:52.041975 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.041986 | orchestrator | "",
2025-09-29 05:42:52.041997 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2025-09-29 05:42:52.042008 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef",
2025-09-29 05:42:52.042074 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042089 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef",
2025-09-29 05:42:52.042100 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042111 | orchestrator | "",
2025-09-29 05:42:52.042122 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2025-09-29 05:42:52.042133 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-09-29 05:42:52.042166 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042178 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2025-09-29 05:42:52.042188 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042199 | orchestrator | "",
2025-09-29 05:42:52.042210 | orchestrator | "Checking service: osismclient (OSISM Client)",
2025-09-29 05:42:52.042256 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042269 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042282 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042295 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042307 | orchestrator | "",
2025-09-29 05:42:52.042320 | orchestrator | "Checking service: ara-server (ARA Server)",
2025-09-29 05:42:52.042333 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2025-09-29 05:42:52.042345 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042358 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2025-09-29 05:42:52.042370 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042383 | orchestrator | "",
2025-09-29 05:42:52.042396 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2025-09-29 05:42:52.042415 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-09-29 05:42:52.042428 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042441 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.3",
2025-09-29 05:42:52.042454 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042467 | orchestrator | "",
2025-09-29 05:42:52.042479 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2025-09-29 05:42:52.042492 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest",
2025-09-29 05:42:52.042504 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042522 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest",
2025-09-29 05:42:52.042534 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042548 | orchestrator | "",
2025-09-29 05:42:52.042561 | orchestrator | "Checking service: redis (Redis Cache)",
2025-09-29 05:42:52.042574 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-09-29 05:42:52.042586 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042597 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.5-alpine",
2025-09-29 05:42:52.042608 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042619 | orchestrator | "",
2025-09-29 05:42:52.042629 | orchestrator | "Checking service: api (OSISM API Service)",
2025-09-29 05:42:52.042640 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042651 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042661 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042672 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042683 | orchestrator | "",
2025-09-29 05:42:52.042693 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2025-09-29 05:42:52.042704 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042714 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042725 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042736 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042747 | orchestrator | "",
2025-09-29 05:42:52.042757 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2025-09-29 05:42:52.042768 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042778 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042789 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042799 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042810 | orchestrator | "",
2025-09-29 05:42:52.042821 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2025-09-29 05:42:52.042831 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042842 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042853 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042872 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042883 | orchestrator | "",
2025-09-29 05:42:52.042893 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2025-09-29 05:42:52.042920 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042931 | orchestrator | " Enabled: true",
2025-09-29 05:42:52.042941 | orchestrator | " Running: registry.osism.tech/osism/osism:latest",
2025-09-29 05:42:52.042952 | orchestrator | " Status: ✅ MATCH",
2025-09-29 05:42:52.042963 | orchestrator | "",
2025-09-29 05:42:52.042973 | orchestrator | "=== Summary ===",
2025-09-29 05:42:52.042984 | orchestrator | "Errors (version mismatches): 0",
2025-09-29 05:42:52.042995 | orchestrator | "Warnings (expected containers not running): 0",
2025-09-29 05:42:52.043005 | orchestrator | "",
2025-09-29 05:42:52.043016 | orchestrator | "✅ All running containers match expected versions!"
2025-09-29 05:42:52.043027 | orchestrator | ]
2025-09-29 05:42:52.043038 | orchestrator | }
2025-09-29 05:42:52.043050 | orchestrator |
2025-09-29 05:42:52.043061 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2025-09-29 05:42:52.112555 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:42:52.112618 | orchestrator |
2025-09-29 05:42:52.112631 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 05:42:52.112646 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-09-29 05:42:52.112657 | orchestrator |
2025-09-29 05:42:52.217698 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-29 05:42:52.217779 | orchestrator | + deactivate
2025-09-29 05:42:52.217795 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-29 05:42:52.217808 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-29 05:42:52.217819 | orchestrator | + export PATH
2025-09-29 05:42:52.217830 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-29 05:42:52.217843 | orchestrator | + '[' -n '' ']'
2025-09-29 05:42:52.217854 | orchestrator | + hash -r
2025-09-29 05:42:52.217865 | orchestrator | + '[' -n '' ']'
2025-09-29 05:42:52.217876 | orchestrator | + unset VIRTUAL_ENV
2025-09-29 05:42:52.217887 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-29 05:42:52.217898 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-09-29 05:42:52.217910 | orchestrator | + unset -f deactivate
2025-09-29 05:42:52.217921 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-29 05:42:52.225545 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-29 05:42:52.225570 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-29 05:42:52.225581 | orchestrator | + local max_attempts=60
2025-09-29 05:42:52.225592 | orchestrator | + local name=ceph-ansible
2025-09-29 05:42:52.225603 | orchestrator | + local attempt_num=1
2025-09-29 05:42:52.226447 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-29 05:42:52.261493 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-29 05:42:52.261531 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-29 05:42:52.261543 | orchestrator | + local max_attempts=60
2025-09-29 05:42:52.261554 | orchestrator | + local name=kolla-ansible
2025-09-29 05:42:52.261565 | orchestrator | + local attempt_num=1
2025-09-29 05:42:52.262158 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-29 05:42:52.291847 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-29 05:42:52.291904 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-29 05:42:52.291916 | orchestrator | + local max_attempts=60
2025-09-29 05:42:52.291928 | orchestrator | + local name=osism-ansible
2025-09-29 05:42:52.291939 | orchestrator | + local attempt_num=1
2025-09-29 05:42:52.292601 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-29 05:42:52.324152 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-29 05:42:52.324257 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-29 05:42:52.324274 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-29 05:42:53.068384 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-29 05:42:53.297552 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-29 05:42:53.297682 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297698 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297709 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-29 05:42:53.297723 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp
2025-09-29 05:42:53.297735 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297745 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297774 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 57 seconds (healthy)
2025-09-29 05:42:53.297786 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297797 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp
2025-09-29 05:42:53.297808 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297819 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp
2025-09-29 05:42:53.297830 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297841 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-29 05:42:53.297852 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.297862 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy)
2025-09-29 05:42:53.304924 | orchestrator | ++ semver latest 7.0.0
2025-09-29 05:42:53.365423 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-29 05:42:53.365494 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-29 05:42:53.365508 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-09-29 05:42:53.369693 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-09-29 05:43:05.614272 | orchestrator | 2025-09-29 05:43:05 | INFO  | Task 7a586ff1-1a92-4c64-935e-030cfa5d71ec (resolvconf) was prepared for execution.
2025-09-29 05:43:05.614384 | orchestrator | 2025-09-29 05:43:05 | INFO  | It takes a moment until task 7a586ff1-1a92-4c64-935e-030cfa5d71ec (resolvconf) has been started and output is visible here.
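The `set -x` trace above shows the shape of the `wait_for_container_healthy` helper: it takes a maximum attempt count and a container name, then probes `docker inspect -f '{{.State.Health.Status}}'` until the container reports healthy. A minimal reconstruction, assuming a polling loop; the retry sleep and the failure path are not visible in the trace (all three containers were healthy on the first probe) and are assumptions:

```shell
# Reconstruction of wait_for_container_healthy from the set -x trace above.
# The sleep interval and the error message are assumptions; the argument
# handling and the docker-inspect health probe follow the trace.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Probe the container's health status until Docker reports "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num == max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

In the trace the helper is called back-to-back for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`, e.g. `wait_for_container_healthy 60 ceph-ansible`.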
2025-09-29 05:43:18.737496 | orchestrator | 2025-09-29 05:43:18.737608 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-29 05:43:18.737627 | orchestrator | 2025-09-29 05:43:18.737639 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-29 05:43:18.737651 | orchestrator | Monday 29 September 2025 05:43:09 +0000 (0:00:00.135) 0:00:00.135 ****** 2025-09-29 05:43:18.737663 | orchestrator | ok: [testbed-manager] 2025-09-29 05:43:18.737675 | orchestrator | 2025-09-29 05:43:18.737687 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-29 05:43:18.737699 | orchestrator | Monday 29 September 2025 05:43:12 +0000 (0:00:03.516) 0:00:03.651 ****** 2025-09-29 05:43:18.737711 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:43:18.737722 | orchestrator | 2025-09-29 05:43:18.737734 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-29 05:43:18.737745 | orchestrator | Monday 29 September 2025 05:43:12 +0000 (0:00:00.063) 0:00:03.715 ****** 2025-09-29 05:43:18.737756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-29 05:43:18.737768 | orchestrator | 2025-09-29 05:43:18.737780 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-29 05:43:18.737791 | orchestrator | Monday 29 September 2025 05:43:12 +0000 (0:00:00.081) 0:00:03.797 ****** 2025-09-29 05:43:18.737814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-29 05:43:18.737825 | orchestrator | 2025-09-29 05:43:18.737836 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-09-29 05:43:18.737847 | orchestrator | Monday 29 September 2025 05:43:12 +0000 (0:00:00.083) 0:00:03.881 ****** 2025-09-29 05:43:18.737859 | orchestrator | ok: [testbed-manager] 2025-09-29 05:43:18.737870 | orchestrator | 2025-09-29 05:43:18.737881 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-29 05:43:18.737892 | orchestrator | Monday 29 September 2025 05:43:13 +0000 (0:00:00.998) 0:00:04.880 ****** 2025-09-29 05:43:18.737903 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:43:18.737914 | orchestrator | 2025-09-29 05:43:18.737925 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-29 05:43:18.737936 | orchestrator | Monday 29 September 2025 05:43:14 +0000 (0:00:00.064) 0:00:04.944 ****** 2025-09-29 05:43:18.737947 | orchestrator | ok: [testbed-manager] 2025-09-29 05:43:18.737958 | orchestrator | 2025-09-29 05:43:18.737969 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-29 05:43:18.737980 | orchestrator | Monday 29 September 2025 05:43:14 +0000 (0:00:00.501) 0:00:05.446 ****** 2025-09-29 05:43:18.737991 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:43:18.738002 | orchestrator | 2025-09-29 05:43:18.738013 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-29 05:43:18.738089 | orchestrator | Monday 29 September 2025 05:43:14 +0000 (0:00:00.079) 0:00:05.525 ****** 2025-09-29 05:43:18.738102 | orchestrator | changed: [testbed-manager] 2025-09-29 05:43:18.738114 | orchestrator | 2025-09-29 05:43:18.738127 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-29 05:43:18.738139 | orchestrator | Monday 29 September 2025 05:43:15 +0000 (0:00:00.548) 0:00:06.073 ****** 2025-09-29 05:43:18.738186 | orchestrator | changed: 
[testbed-manager] 2025-09-29 05:43:18.738200 | orchestrator | 2025-09-29 05:43:18.738235 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-29 05:43:18.738248 | orchestrator | Monday 29 September 2025 05:43:16 +0000 (0:00:01.109) 0:00:07.183 ****** 2025-09-29 05:43:18.738261 | orchestrator | ok: [testbed-manager] 2025-09-29 05:43:18.738273 | orchestrator | 2025-09-29 05:43:18.738286 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-29 05:43:18.738298 | orchestrator | Monday 29 September 2025 05:43:17 +0000 (0:00:01.025) 0:00:08.208 ****** 2025-09-29 05:43:18.738331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-29 05:43:18.738343 | orchestrator | 2025-09-29 05:43:18.738354 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-29 05:43:18.738365 | orchestrator | Monday 29 September 2025 05:43:17 +0000 (0:00:00.095) 0:00:08.303 ****** 2025-09-29 05:43:18.738376 | orchestrator | changed: [testbed-manager] 2025-09-29 05:43:18.738387 | orchestrator | 2025-09-29 05:43:18.738397 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:43:18.738409 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-29 05:43:18.738420 | orchestrator | 2025-09-29 05:43:18.738432 | orchestrator | 2025-09-29 05:43:18.738443 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 05:43:18.738453 | orchestrator | Monday 29 September 2025 05:43:18 +0000 (0:00:01.128) 0:00:09.432 ****** 2025-09-29 05:43:18.738464 | orchestrator | =============================================================================== 2025-09-29 05:43:18.738475 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.52s 2025-09-29 05:43:18.738486 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2025-09-29 05:43:18.738496 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2025-09-29 05:43:18.738507 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.03s 2025-09-29 05:43:18.738518 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.00s 2025-09-29 05:43:18.738528 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-09-29 05:43:18.738558 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-09-29 05:43:18.738569 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2025-09-29 05:43:18.738580 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-09-29 05:43:18.738591 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-29 05:43:18.738602 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-29 05:43:18.738613 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-29 05:43:18.738624 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-09-29 05:43:19.009395 | orchestrator | + osism apply sshconfig 2025-09-29 05:43:31.161743 | orchestrator | 2025-09-29 05:43:31 | INFO  | Task 5fdde3ef-311c-410b-aa91-560cc4a26c2c (sshconfig) was prepared for execution. 
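Earlier in the log, before `osism apply resolvconf` ran, the deployment script gated a callback-plugin tweak on the manager version: `semver latest 7.0.0` printed `-1`, so the numeric test `[[ -1 -ge 0 ]]` failed, but the `[[ latest == latest ]]` special case still triggered the `sed` edit of `ansible.cfg`. A sketch of that decision logic, with assumed variable names (`MANAGER_VERSION`, `SEMVER_CMP`):

```shell
# Sketch of the version gate traced above: apply the still_alive callback
# tweak when the manager version is >= 7.0.0 or is the moving "latest" tag.
# Variable names are assumptions; in the log, semver(1) compared "latest"
# against 7.0.0 and printed -1.
MANAGER_VERSION="latest"
SEMVER_CMP=-1   # output of: semver "$MANAGER_VERSION" 7.0.0

apply_tweak=false
if [ "$SEMVER_CMP" -ge 0 ] || [ "$MANAGER_VERSION" = "latest" ]; then
    apply_tweak=true
fi
echo "$apply_tweak"   # prints "true"
```

When the gate holds, the script substitutes `osism.commons.still_alive` for `community.general.yaml` in `/opt/configuration/environments/ansible.cfg`, as seen in the trace.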
2025-09-29 05:43:31.161855 | orchestrator | 2025-09-29 05:43:31 | INFO  | It takes a moment until task 5fdde3ef-311c-410b-aa91-560cc4a26c2c (sshconfig) has been started and output is visible here. 2025-09-29 05:43:42.432056 | orchestrator | 2025-09-29 05:43:42.432174 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-29 05:43:42.432191 | orchestrator | 2025-09-29 05:43:42.432249 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-29 05:43:42.432261 | orchestrator | Monday 29 September 2025 05:43:35 +0000 (0:00:00.162) 0:00:00.162 ****** 2025-09-29 05:43:42.432273 | orchestrator | ok: [testbed-manager] 2025-09-29 05:43:42.432285 | orchestrator | 2025-09-29 05:43:42.432297 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-29 05:43:42.432307 | orchestrator | Monday 29 September 2025 05:43:35 +0000 (0:00:00.571) 0:00:00.734 ****** 2025-09-29 05:43:42.432318 | orchestrator | changed: [testbed-manager] 2025-09-29 05:43:42.432330 | orchestrator | 2025-09-29 05:43:42.432341 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-29 05:43:42.432351 | orchestrator | Monday 29 September 2025 05:43:36 +0000 (0:00:00.530) 0:00:01.264 ****** 2025-09-29 05:43:42.432388 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-29 05:43:42.432399 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-29 05:43:42.432410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-09-29 05:43:42.432421 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-29 05:43:42.432431 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-29 05:43:42.432442 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-29 05:43:42.432453 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-09-29 05:43:42.432463 | orchestrator | 2025-09-29 05:43:42.432474 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-29 05:43:42.432485 | orchestrator | Monday 29 September 2025 05:43:41 +0000 (0:00:05.354) 0:00:06.619 ****** 2025-09-29 05:43:42.432496 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:43:42.432506 | orchestrator | 2025-09-29 05:43:42.432517 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-29 05:43:42.432528 | orchestrator | Monday 29 September 2025 05:43:41 +0000 (0:00:00.063) 0:00:06.682 ****** 2025-09-29 05:43:42.432538 | orchestrator | changed: [testbed-manager] 2025-09-29 05:43:42.432549 | orchestrator | 2025-09-29 05:43:42.432560 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:43:42.432571 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:43:42.432583 | orchestrator | 2025-09-29 05:43:42.432594 | orchestrator | 2025-09-29 05:43:42.432608 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 05:43:42.432620 | orchestrator | Monday 29 September 2025 05:43:42 +0000 (0:00:00.587) 0:00:07.269 ****** 2025-09-29 05:43:42.432633 | orchestrator | =============================================================================== 2025-09-29 05:43:42.432646 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.35s 2025-09-29 05:43:42.432659 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2025-09-29 05:43:42.432671 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2025-09-29 05:43:42.432683 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2025-09-29 05:43:42.432696 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-09-29 05:43:42.716171 | orchestrator | + osism apply known-hosts 2025-09-29 05:43:54.705961 | orchestrator | 2025-09-29 05:43:54 | INFO  | Task 8f40bd5e-4019-4dfa-9a35-c00a2277ff80 (known-hosts) was prepared for execution. 2025-09-29 05:43:54.706111 | orchestrator | 2025-09-29 05:43:54 | INFO  | It takes a moment until task 8f40bd5e-4019-4dfa-9a35-c00a2277ff80 (known-hosts) has been started and output is visible here. 2025-09-29 05:44:11.551372 | orchestrator | 2025-09-29 05:44:11.551490 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-29 05:44:11.551506 | orchestrator | 2025-09-29 05:44:11.551518 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-29 05:44:11.551530 | orchestrator | Monday 29 September 2025 05:43:58 +0000 (0:00:00.182) 0:00:00.182 ****** 2025-09-29 05:44:11.551542 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-29 05:44:11.551554 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-29 05:44:11.551565 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-29 05:44:11.551576 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-29 05:44:11.551586 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-29 05:44:11.551597 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-29 05:44:11.551608 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-29 05:44:11.551618 | orchestrator | 2025-09-29 05:44:11.551629 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-29 05:44:11.551665 | orchestrator | Monday 29 September 2025 05:44:04 +0000 (0:00:05.744) 0:00:05.927 ****** 2025-09-29 
05:44:11.551688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-29 05:44:11.551701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-29 05:44:11.551712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-29 05:44:11.551724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-29 05:44:11.551734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-29 05:44:11.551746 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-29 05:44:11.551757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-29 05:44:11.551768 | orchestrator | 2025-09-29 05:44:11.551779 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-29 05:44:11.551790 | orchestrator | Monday 29 September 2025 05:44:04 +0000 (0:00:00.154) 0:00:06.081 ****** 2025-09-29 05:44:11.551802 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK6SFS9KsAeJ17dbkIdivonRl1ImZLkMEgkQgX6JLWDUrXiSauWhssWs655hBH0qRU1onb4m00ckkHVzx38ddqA=) 2025-09-29 05:44:11.551815 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMUufr4L9NnVwAMUlqwJm5jgmDWGEOalqdVRL8lx0XXf) 2025-09-29 05:44:11.551830 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXUdBl9VIj5VKL6YoWaExZNu6Nr2Af7AGITHYE2/dAyQNfLC5A574dDbDlgPE9SnPCyJ+RAlVfzGwDtgxSXy0HA3bogJcDGf6XXTKOAa3qFk+H5JnH43cI1/j/+5beeLE4HdjZuJPVGBp/2ViLu35q05gWuRcrwZd4DbBSccBSrNIF5tgVPbHDsyczNdsuWlRPZqK1kW/fcwc0mL4QILTaZqzT+Tx5kWAynWrFm4TsK4DDapFUYPYKeHwbxDLlghy6L/bAGIm6K1s3BQHUgBqV0ddqVCmd0jaUIRNQfD1P91cYRcEZ8CVO5RkMLqNbkVE8nessopfy5GlmSEe9cn1Lh8K5YcqXZKT7MhC6K2cpjAiSNP54HdivJLCoBDQB+w7Itgvdlv+4e6LViOOl0zeeGR4axp2QLauwlMqAAsmz4cHRFBfVvfWuUXU4PO7v4DZD/A8PDbjYcXHLPCtnKwkYHs/GjduY64FeIUjuCSncekSwD0iGz+FPkZr1FD7GH+0=) 2025-09-29 05:44:11.551845 | orchestrator | 2025-09-29 05:44:11.551856 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-29 05:44:11.551867 | orchestrator | Monday 29 September 2025 05:44:05 +0000 (0:00:01.068) 0:00:07.150 ****** 2025-09-29 05:44:11.551901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1AlxqkYYu5zU3WHRFv3/U1fdYz0UKTKnwoHKhUpq3PDK40/G7CfGWCu19OY68R+z3kP2L4l+rBvHN1uzUv8V8xIZ682qDQi3FGEtioJrXI7bBWbySb+FXYASd6czXu7qqzK31cNgMIQhl7UMdVDlMC5nXxrNccgEAfqz2O82+GwKQhTRWIdRT028K0P013H8CTJqoaPTr5Qgk/NNmLCopuHsA+a4HwE/ttHvkQFBupRGSQkneda73tweWJOzTZr6AVdv/MVE0LsS1/keYW/MhWNTLDDrIlkXvcVElqAMn1f+w9iGJUgh9afHphMtIxu3VmGIJniopIWD8QND0zO84ENKT1/++nCcE/AhZX5C5TI3fihzLoRMJdVoMM5uMtuyJI07N2QtJ2ZBzuTxG24XN2jFtJUhILUqC+pzYsew3JbB9WjwwpePbuz4/damAMOIeLqRWVcVqmKWLhPDYTiuxNRzMhb+0OH56W7zqGJDTG8U4krLjvUfoyUmGn3yp7fE=) 2025-09-29 05:44:11.551915 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEuvbq55b3h+ywupHC3ynDzN+h8YI6SlZv2ibB7i3lz6bDNnrXJjMG3poJ2AVe07R9oW/IDS3KPJ9aN8MkqfZ6g=) 2025-09-29 05:44:11.551935 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBMyUXDd39/OLsY0tNKkmfl83F1iP+tyXMnm6e5alO7J) 2025-09-29 05:44:11.551946 | orchestrator | 2025-09-29 05:44:11.551958 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-29 05:44:11.551968 | orchestrator | Monday 29 September 2025 05:44:06 +0000 (0:00:00.959) 0:00:08.109 ****** 2025-09-29 05:44:11.551979 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMd/sAulQ5pgsY0Lw7ih31a3qxn/Q/U3oVdXxcQ71JOt) 2025-09-29 05:44:11.551991 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeocmmJ7R2xv4MQzPwHzE30PicaPF6YXG3UdeBDhDCI8lJYN751yHIhy5qgdALEAEuEO+w+8Djwnd8JE9uLA1R7hK54ZtutpLuDHBR9xp7J9W7uWLCC0IdASG77MrZMxZokfsipdL9o0aW+lH6RyjAr8ULGDWaZDHZ/RpeawL+6AmjARAZct+8YsRKYH363Rl6qZmJYeez1VzaBsQAq4hlCAsQsPNbDRzX3FwEyGn9/ATVTGMN43O2olKNlfO+4mb4R4IlNopfK/BvB9M0UspNMxxnP1VAwbDtLZen7EED7C8u2Otb+MHUlsAWMIX/CF0fpt5jfJF1jJ3sI5i21KpAesbhAM9T1z2n/nGJsdxJLOdM3JvqkgaHyohvgofIwsMEZ8qJD6gOmFUWiW+SbVS5tAp7Ei1vsOI3VcgV3rXnpKyoHoDj5oZ4Y1TywRehIf+9pEAnFUayOvt6qDtA/tloHJNAICvrdcYgYqpEO7EayUNGapl99Ve01oW4xxQvlh8=) 2025-09-29 05:44:11.552064 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH6kccSaed2/GtoCYqZ7V2J8H6iipTxLAeESknsLZv0mRhCjcLF+iLVsAzBraH+fiJpqs8d6ORFspk4XZ0qhoUE=) 2025-09-29 05:44:11.552076 | orchestrator | 2025-09-29 05:44:11.552087 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-29 05:44:11.552098 | orchestrator | Monday 29 September 2025 05:44:07 +0000 (0:00:00.941) 0:00:09.050 ****** 
2025-09-29 05:44:11.552113 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdzZQPIwa/0tzvQLaoXDn2QX+7KWPyh3/qLPgxj1zvnCorxl/BxLiEkdp5VyQm5+gJbpDtpRKSVvyp9j3BNs8g=)
2025-09-29 05:44:11.552125 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjDQrpiLWRLmv4IfxuFNHqgo9bUN0nCgysOZKkCYq6YN5Euy3VWnmPQpNW90184E1ZNHJxv4jWiFKNDNJCaJOflVxiM9vsILmR9CxCdbPhounmfnb8uJeoHOG2cFjvpXBXrFCdd6/yHne0PJo7RFDH4zFIjosJLIEH88vMUqcPLADntmtknT10mo5oucK587EWzlj8K2GB1ck2M1Myw8kckyY9DaGHceGcaMj9c7ytqibMRX2wUbxQPj4ig25gxujX5qX/+YEJgCzUlfORRGCnibFnRXsjX9vy4Z7uWC134/pvDqDDEVAwS451P9Ck7sYdS6Pn8/xSrpkrDDV3PLQQ4fIjjDZF5cm3eBGN9GuFqRvrul/KK1GHv/tvhiTASArmxdcQbB5cE93ucUFasFBO7iwwvoLXYsX2BFzj+z9YmpY8e30Jfr17lSo2a1bD2dwffZw7dmJ//B67NNl7U/bScL/4h0uuJU3hPQtD7QF+ejrkSlddmDCY8JZHfAOpo0M=)
2025-09-29 05:44:11.552137 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDqS8fvxL/CJ/nzpLFtsg0KAjaPEkLvRHvxLAbwaKY7y)
2025-09-29 05:44:11.552148 | orchestrator |
2025-09-29 05:44:11.552159 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:11.552170 | orchestrator | Monday 29 September 2025 05:44:09 +0000 (0:00:01.974) 0:00:11.024 ******
2025-09-29 05:44:11.552181 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNDsDKX0fnhZ133ETTzwAB+yO6UbIaY7on2x1H/j4O+VbpAacI/CM2XbbWEBXiCrhq2Uvqxc+41DYvF47HQ5ZAt3GEhviIrsYA895NxIMnw/I7sgoxSR16f/9VJBPl2orjVj1ASKIqa+G4tipDh7Ld4iCFF5WTlQjMm7QPPUA9jDaddkCAfUzG6r9bTnoC+Yr8eMaz4JpY/flvZFKVOfhdFx6F1D0mBYD6OtADjI37weJ47GTP2CXXwY92C1U0Ynsq0FUZvr3Pq+2Sg9MlZ2P7dTh1zqiAHehzbOyCfVjOcL2Ve225XNtM8vnSk3RnSVY+Be5gy6KG7isAre/crVhM+W/42smjFiXp+BZekVV/xHjo07vcOqD0QyKRrc7K+EWU+eUemodYDqHoZTD8sv++Lr4zJ3cxg1JHS49v1r5v8O5QutDT+I2ez4wgZWFgOD55c98zcye+dk9eNe3n5ni1PvL6HGE3hVuNGEt53sLQDFONg5yDgKD06u+H5nmvnJM=)
2025-09-29 05:44:11.552222 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcYxKEOc8cAHq638lgB12gIbrAIpScW2ZykzXdxRZIinmFEO1sC7N+E1CIU3O2vBENJPJA6ffEBJpO/Ns6aq3Y=)
2025-09-29 05:44:11.552240 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVe6ICtAmLU8BhgJTnpJ5J0be5emTwE20TPgvpe91Nk)
2025-09-29 05:44:11.552251 | orchestrator |
2025-09-29 05:44:11.552262 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:11.552273 | orchestrator | Monday 29 September 2025 05:44:10 +0000 (0:00:00.960) 0:00:11.985 ******
2025-09-29 05:44:11.552294 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsjGM9JhMflOphrllIVaKM2vtPn9DVWyCyY9f2T6CPV7USi1+ODzwC/Ot8ia3eYA1tkqfCWZLwcpNDgmM3vLn/jzf1lOgdv8mSvrbR5BPCQUAcQOa7LNbb1sBZrvuNOO8mD3uc+OE93TXUUZB6tlh6DglvLz/i8JRscBkm84k7EpxZjOHkoA4vF6sxAWS9JmZ3rSYmpAUDu4Lox2eeVpISS+piM92juHn8FHfZaHPYIoUF9Hmeh1unpALROKcXPrrdg9gi/5rwMYi8kFEw4tmV5LELp2mHf1plAWMw51U//7AHRGUKJfw9Ffgwv0nckA0N+GA5AT9HdCle7U8G85zCEa3kKj39lf70RnZIIiPZSLabi2aVlJ9SDIkr2oeWtsWcZ8XIF/3S+7FW3XYGvwy/tpCrG/N7LIClqDNPn4xgmnfT/mPnL4jxAnX1hcaQ75BIDJmhmVIfmPycHM6HRS87dc507M6bsiO79LAibHrRP+ePGzrLFGIHB9xigIw+TM0=)
2025-09-29 05:44:21.474469 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKn5PZE46Lq9VRQztLtmP9ZixxYfHe9eqqcnRg+mN1S8DuGGQAcF6zlfu6w3/lV04hqoRmAd9ORM/Bu/T/LLHCM=)
2025-09-29 05:44:21.474584 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIsdljUQKdf7TAceyRZXo6iLUUw5vdwtTmZBNKlwEEz6)
2025-09-29 05:44:21.474602 | orchestrator |
2025-09-29 05:44:21.474615 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:21.474628 | orchestrator | Monday 29 September 2025 05:44:11 +0000 (0:00:01.005) 0:00:12.991 ******
2025-09-29 05:44:21.474639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2/Ln3HTTQToe9RcOVFGrRJKtjBfU7eEGmh6aHZH74nQ0C/jewFI8P7Pl7vzh7q04pwpkYMIJ/cdkwzE8vM3/U=)
2025-09-29 05:44:21.474653 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC41/eW9jG0DgMUPJSDaAoorH6mjneTwNyqJubQ8D5vJZHXmudLNDs7R1Ltkc1QF8OngFjCfeShaWxY901zvRjRrw8wbugr8ZzcVot5SHeMcOHCVe4R5wNz/bKVgoqRq67q9PTT0zlGIqezB1r2+UTwZdRFpLIcZpH4NPX5kgBgdrBtV7BQouuQRd6j/s+mXwyMGeZ8kT8dvTiIEXof/duiCbrJO42N9eJKnYhLSJrYKfRlry/5Dpki5MgY+sJlyjSIwGk7mMAYfGoekvoJBcLbIV2xhB4CbgdgCNYvTrHW6I5pMZgZRvAcArjuZLgPovjMG7SQjrmc9gtmT/FqMqwtt/GxuQvEqRkqp7QFNjARW0PgKsUMclU++QlWEhJDOCId4hbiXN9AN4O/DlHnv+s10YZMJDlgu5RzIO2xW8xMiPADB9K/ttR1JPBapfGE6sP+PTO64+AGm+kAhTips9rq3C6qvU9EIyJVgtt/LlhNln6JTpi18wPfLTKnmXx0BHU=)
2025-09-29 05:44:21.474668 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID3mnzcq4QSC6Ir1JToaBWAfy0zx5sOoLTQKCi100vmA)
2025-09-29 05:44:21.474688 | orchestrator |
2025-09-29 05:44:21.474708 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-09-29 05:44:21.474727 | orchestrator | Monday 29 September 2025 05:44:12 +0000 (0:00:00.954) 0:00:13.945 ******
2025-09-29 05:44:21.474746 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-29 05:44:21.474766 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-29 05:44:21.474784 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-29 05:44:21.474803 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-29 05:44:21.474824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-29 05:44:21.474866 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-29 05:44:21.474879 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-29 05:44:21.474890 | orchestrator |
2025-09-29 05:44:21.474901 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-09-29 05:44:21.474913 | orchestrator | Monday 29 September 2025 05:44:17 +0000 (0:00:04.951) 0:00:18.897 ******
2025-09-29 05:44:21.474925 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-29 05:44:21.474963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-29 05:44:21.474974 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-29 05:44:21.474985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-29 05:44:21.474999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-29 05:44:21.475012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-29 05:44:21.475025 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-29 05:44:21.475037 | orchestrator |
2025-09-29 05:44:21.475050 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:21.475063 | orchestrator | Monday 29 September 2025 05:44:17 +0000 (0:00:00.156) 0:00:19.054 ******
2025-09-29 05:44:21.475076 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMUufr4L9NnVwAMUlqwJm5jgmDWGEOalqdVRL8lx0XXf)
2025-09-29 05:44:21.475116 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDXUdBl9VIj5VKL6YoWaExZNu6Nr2Af7AGITHYE2/dAyQNfLC5A574dDbDlgPE9SnPCyJ+RAlVfzGwDtgxSXy0HA3bogJcDGf6XXTKOAa3qFk+H5JnH43cI1/j/+5beeLE4HdjZuJPVGBp/2ViLu35q05gWuRcrwZd4DbBSccBSrNIF5tgVPbHDsyczNdsuWlRPZqK1kW/fcwc0mL4QILTaZqzT+Tx5kWAynWrFm4TsK4DDapFUYPYKeHwbxDLlghy6L/bAGIm6K1s3BQHUgBqV0ddqVCmd0jaUIRNQfD1P91cYRcEZ8CVO5RkMLqNbkVE8nessopfy5GlmSEe9cn1Lh8K5YcqXZKT7MhC6K2cpjAiSNP54HdivJLCoBDQB+w7Itgvdlv+4e6LViOOl0zeeGR4axp2QLauwlMqAAsmz4cHRFBfVvfWuUXU4PO7v4DZD/A8PDbjYcXHLPCtnKwkYHs/GjduY64FeIUjuCSncekSwD0iGz+FPkZr1FD7GH+0=)
2025-09-29 05:44:21.475131 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK6SFS9KsAeJ17dbkIdivonRl1ImZLkMEgkQgX6JLWDUrXiSauWhssWs655hBH0qRU1onb4m00ckkHVzx38ddqA=)
2025-09-29 05:44:21.475144 | orchestrator |
2025-09-29 05:44:21.475155 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:21.475166 | orchestrator | Monday 29 September 2025 05:44:18 +0000 (0:00:00.982) 0:00:20.037 ******
2025-09-29 05:44:21.475177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEuvbq55b3h+ywupHC3ynDzN+h8YI6SlZv2ibB7i3lz6bDNnrXJjMG3poJ2AVe07R9oW/IDS3KPJ9aN8MkqfZ6g=)
2025-09-29 05:44:21.475219 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1AlxqkYYu5zU3WHRFv3/U1fdYz0UKTKnwoHKhUpq3PDK40/G7CfGWCu19OY68R+z3kP2L4l+rBvHN1uzUv8V8xIZ682qDQi3FGEtioJrXI7bBWbySb+FXYASd6czXu7qqzK31cNgMIQhl7UMdVDlMC5nXxrNccgEAfqz2O82+GwKQhTRWIdRT028K0P013H8CTJqoaPTr5Qgk/NNmLCopuHsA+a4HwE/ttHvkQFBupRGSQkneda73tweWJOzTZr6AVdv/MVE0LsS1/keYW/MhWNTLDDrIlkXvcVElqAMn1f+w9iGJUgh9afHphMtIxu3VmGIJniopIWD8QND0zO84ENKT1/++nCcE/AhZX5C5TI3fihzLoRMJdVoMM5uMtuyJI07N2QtJ2ZBzuTxG24XN2jFtJUhILUqC+pzYsew3JbB9WjwwpePbuz4/damAMOIeLqRWVcVqmKWLhPDYTiuxNRzMhb+0OH56W7zqGJDTG8U4krLjvUfoyUmGn3yp7fE=)
2025-09-29 05:44:21.475231 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBMyUXDd39/OLsY0tNKkmfl83F1iP+tyXMnm6e5alO7J)
2025-09-29 05:44:21.475242 | orchestrator |
2025-09-29 05:44:21.475253 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:21.475273 | orchestrator | Monday 29 September 2025 05:44:19 +0000 (0:00:00.942) 0:00:20.979 ******
2025-09-29 05:44:21.475284 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH6kccSaed2/GtoCYqZ7V2J8H6iipTxLAeESknsLZv0mRhCjcLF+iLVsAzBraH+fiJpqs8d6ORFspk4XZ0qhoUE=)
2025-09-29 05:44:21.475295 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDeocmmJ7R2xv4MQzPwHzE30PicaPF6YXG3UdeBDhDCI8lJYN751yHIhy5qgdALEAEuEO+w+8Djwnd8JE9uLA1R7hK54ZtutpLuDHBR9xp7J9W7uWLCC0IdASG77MrZMxZokfsipdL9o0aW+lH6RyjAr8ULGDWaZDHZ/RpeawL+6AmjARAZct+8YsRKYH363Rl6qZmJYeez1VzaBsQAq4hlCAsQsPNbDRzX3FwEyGn9/ATVTGMN43O2olKNlfO+4mb4R4IlNopfK/BvB9M0UspNMxxnP1VAwbDtLZen7EED7C8u2Otb+MHUlsAWMIX/CF0fpt5jfJF1jJ3sI5i21KpAesbhAM9T1z2n/nGJsdxJLOdM3JvqkgaHyohvgofIwsMEZ8qJD6gOmFUWiW+SbVS5tAp7Ei1vsOI3VcgV3rXnpKyoHoDj5oZ4Y1TywRehIf+9pEAnFUayOvt6qDtA/tloHJNAICvrdcYgYqpEO7EayUNGapl99Ve01oW4xxQvlh8=)
2025-09-29 05:44:21.475306 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMd/sAulQ5pgsY0Lw7ih31a3qxn/Q/U3oVdXxcQ71JOt)
2025-09-29 05:44:21.475317 | orchestrator |
2025-09-29 05:44:21.475328 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:21.475339 | orchestrator | Monday 29 September 2025 05:44:20 +0000 (0:00:00.989) 0:00:21.969 ******
2025-09-29 05:44:21.475356 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDjDQrpiLWRLmv4IfxuFNHqgo9bUN0nCgysOZKkCYq6YN5Euy3VWnmPQpNW90184E1ZNHJxv4jWiFKNDNJCaJOflVxiM9vsILmR9CxCdbPhounmfnb8uJeoHOG2cFjvpXBXrFCdd6/yHne0PJo7RFDH4zFIjosJLIEH88vMUqcPLADntmtknT10mo5oucK587EWzlj8K2GB1ck2M1Myw8kckyY9DaGHceGcaMj9c7ytqibMRX2wUbxQPj4ig25gxujX5qX/+YEJgCzUlfORRGCnibFnRXsjX9vy4Z7uWC134/pvDqDDEVAwS451P9Ck7sYdS6Pn8/xSrpkrDDV3PLQQ4fIjjDZF5cm3eBGN9GuFqRvrul/KK1GHv/tvhiTASArmxdcQbB5cE93ucUFasFBO7iwwvoLXYsX2BFzj+z9YmpY8e30Jfr17lSo2a1bD2dwffZw7dmJ//B67NNl7U/bScL/4h0uuJU3hPQtD7QF+ejrkSlddmDCY8JZHfAOpo0M=)
2025-09-29 05:44:21.475368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAdzZQPIwa/0tzvQLaoXDn2QX+7KWPyh3/qLPgxj1zvnCorxl/BxLiEkdp5VyQm5+gJbpDtpRKSVvyp9j3BNs8g=)
2025-09-29 05:44:21.475390 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDqS8fvxL/CJ/nzpLFtsg0KAjaPEkLvRHvxLAbwaKY7y)
2025-09-29 05:44:25.417778 | orchestrator |
2025-09-29 05:44:25.417879 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:25.417895 | orchestrator | Monday 29 September 2025 05:44:21 +0000 (0:00:00.946) 0:00:22.916 ******
2025-09-29 05:44:25.417909 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcYxKEOc8cAHq638lgB12gIbrAIpScW2ZykzXdxRZIinmFEO1sC7N+E1CIU3O2vBENJPJA6ffEBJpO/Ns6aq3Y=)
2025-09-29 05:44:25.417925 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNDsDKX0fnhZ133ETTzwAB+yO6UbIaY7on2x1H/j4O+VbpAacI/CM2XbbWEBXiCrhq2Uvqxc+41DYvF47HQ5ZAt3GEhviIrsYA895NxIMnw/I7sgoxSR16f/9VJBPl2orjVj1ASKIqa+G4tipDh7Ld4iCFF5WTlQjMm7QPPUA9jDaddkCAfUzG6r9bTnoC+Yr8eMaz4JpY/flvZFKVOfhdFx6F1D0mBYD6OtADjI37weJ47GTP2CXXwY92C1U0Ynsq0FUZvr3Pq+2Sg9MlZ2P7dTh1zqiAHehzbOyCfVjOcL2Ve225XNtM8vnSk3RnSVY+Be5gy6KG7isAre/crVhM+W/42smjFiXp+BZekVV/xHjo07vcOqD0QyKRrc7K+EWU+eUemodYDqHoZTD8sv++Lr4zJ3cxg1JHS49v1r5v8O5QutDT+I2ez4wgZWFgOD55c98zcye+dk9eNe3n5ni1PvL6HGE3hVuNGEt53sLQDFONg5yDgKD06u+H5nmvnJM=)
2025-09-29 05:44:25.417941 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVe6ICtAmLU8BhgJTnpJ5J0be5emTwE20TPgvpe91Nk)
2025-09-29 05:44:25.417953 | orchestrator |
2025-09-29 05:44:25.417965 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:25.417995 | orchestrator | Monday 29 September 2025 05:44:22 +0000 (0:00:00.964) 0:00:23.880 ******
2025-09-29 05:44:25.418080 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKn5PZE46Lq9VRQztLtmP9ZixxYfHe9eqqcnRg+mN1S8DuGGQAcF6zlfu6w3/lV04hqoRmAd9ORM/Bu/T/LLHCM=)
2025-09-29 05:44:25.418095 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsjGM9JhMflOphrllIVaKM2vtPn9DVWyCyY9f2T6CPV7USi1+ODzwC/Ot8ia3eYA1tkqfCWZLwcpNDgmM3vLn/jzf1lOgdv8mSvrbR5BPCQUAcQOa7LNbb1sBZrvuNOO8mD3uc+OE93TXUUZB6tlh6DglvLz/i8JRscBkm84k7EpxZjOHkoA4vF6sxAWS9JmZ3rSYmpAUDu4Lox2eeVpISS+piM92juHn8FHfZaHPYIoUF9Hmeh1unpALROKcXPrrdg9gi/5rwMYi8kFEw4tmV5LELp2mHf1plAWMw51U//7AHRGUKJfw9Ffgwv0nckA0N+GA5AT9HdCle7U8G85zCEa3kKj39lf70RnZIIiPZSLabi2aVlJ9SDIkr2oeWtsWcZ8XIF/3S+7FW3XYGvwy/tpCrG/N7LIClqDNPn4xgmnfT/mPnL4jxAnX1hcaQ75BIDJmhmVIfmPycHM6HRS87dc507M6bsiO79LAibHrRP+ePGzrLFGIHB9xigIw+TM0=)
2025-09-29 05:44:25.418107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIsdljUQKdf7TAceyRZXo6iLUUw5vdwtTmZBNKlwEEz6)
2025-09-29 05:44:25.418118 | orchestrator |
2025-09-29 05:44:25.418129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-29 05:44:25.418140 | orchestrator | Monday 29 September 2025 05:44:23 +0000 (0:00:00.994) 0:00:24.875 ******
2025-09-29 05:44:25.418151 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM2/Ln3HTTQToe9RcOVFGrRJKtjBfU7eEGmh6aHZH74nQ0C/jewFI8P7Pl7vzh7q04pwpkYMIJ/cdkwzE8vM3/U=)
2025-09-29 05:44:25.418162 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC41/eW9jG0DgMUPJSDaAoorH6mjneTwNyqJubQ8D5vJZHXmudLNDs7R1Ltkc1QF8OngFjCfeShaWxY901zvRjRrw8wbugr8ZzcVot5SHeMcOHCVe4R5wNz/bKVgoqRq67q9PTT0zlGIqezB1r2+UTwZdRFpLIcZpH4NPX5kgBgdrBtV7BQouuQRd6j/s+mXwyMGeZ8kT8dvTiIEXof/duiCbrJO42N9eJKnYhLSJrYKfRlry/5Dpki5MgY+sJlyjSIwGk7mMAYfGoekvoJBcLbIV2xhB4CbgdgCNYvTrHW6I5pMZgZRvAcArjuZLgPovjMG7SQjrmc9gtmT/FqMqwtt/GxuQvEqRkqp7QFNjARW0PgKsUMclU++QlWEhJDOCId4hbiXN9AN4O/DlHnv+s10YZMJDlgu5RzIO2xW8xMiPADB9K/ttR1JPBapfGE6sP+PTO64+AGm+kAhTips9rq3C6qvU9EIyJVgtt/LlhNln6JTpi18wPfLTKnmXx0BHU=)
2025-09-29 05:44:25.418174 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID3mnzcq4QSC6Ir1JToaBWAfy0zx5sOoLTQKCi100vmA)
2025-09-29 05:44:25.418231 | orchestrator |
2025-09-29 05:44:25.418244 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-09-29 05:44:25.418255 | orchestrator | Monday 29 September 2025 05:44:24 +0000 (0:00:00.989) 0:00:25.865 ******
2025-09-29 05:44:25.418266 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-29 05:44:25.418278 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-29 05:44:25.418288 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-29 05:44:25.418299 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-29 05:44:25.418310 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-29 05:44:25.418322 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-29 05:44:25.418335 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-29 05:44:25.418347 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:44:25.418360 | orchestrator |
2025-09-29 05:44:25.418391 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-09-29 05:44:25.418405 | orchestrator | Monday 29 September 2025 05:44:24 +0000 (0:00:00.152) 0:00:26.017 ******
2025-09-29 05:44:25.418417 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:44:25.418429 | orchestrator |
2025-09-29 05:44:25.418442 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-09-29 05:44:25.418454 | orchestrator | Monday 29 September 2025 05:44:24 +0000 (0:00:00.055) 0:00:26.072 ******
2025-09-29 05:44:25.418466 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:44:25.418478 | orchestrator |
2025-09-29 05:44:25.418499 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-09-29 05:44:25.418512 | orchestrator | Monday 29 September 2025 05:44:24 +0000 (0:00:00.048) 0:00:26.121 ******
2025-09-29 05:44:25.418524 | orchestrator | changed: [testbed-manager]
2025-09-29 05:44:25.418536 | orchestrator |
2025-09-29 05:44:25.418549 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 05:44:25.418561 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-29 05:44:25.418575 | orchestrator |
2025-09-29 05:44:25.418587 | orchestrator |
2025-09-29 05:44:25.418600 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 05:44:25.418612 | orchestrator | Monday 29 September 2025 05:44:25 +0000 (0:00:00.567) 0:00:26.688 ******
2025-09-29 05:44:25.418625 | orchestrator | ===============================================================================
2025-09-29 05:44:25.418637 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.74s
2025-09-29 05:44:25.418650 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.95s
2025-09-29 05:44:25.418664 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.97s
2025-09-29 05:44:25.418675 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-09-29 05:44:25.418686 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-09-29 05:44:25.418697 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-09-29 05:44:25.418707 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-09-29 05:44:25.418718 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-09-29 05:44:25.418728 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2025-09-29 05:44:25.418739 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2025-09-29 05:44:25.418758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2025-09-29 05:44:25.418769 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2025-09-29 05:44:25.418780 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s
2025-09-29 05:44:25.418790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s
2025-09-29 05:44:25.418801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s
2025-09-29 05:44:25.418811 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s
2025-09-29 05:44:25.418822 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.57s
2025-09-29 05:44:25.418833 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-09-29 05:44:25.418844 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s
2025-09-29 05:44:25.418854 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2025-09-29 05:44:25.616216 | orchestrator | + osism apply squid
2025-09-29 05:44:37.399915 | orchestrator | 2025-09-29 05:44:37 | INFO  | Task ef38fe71-ab72-44db-93d0-cefaea00a1c4 (squid) was prepared for execution.
2025-09-29 05:44:37.400026 | orchestrator | 2025-09-29 05:44:37 | INFO  | It takes a moment until task ef38fe71-ab72-44db-93d0-cefaea00a1c4 (squid) has been started and output is visible here.
2025-09-29 05:46:32.088605 | orchestrator |
2025-09-29 05:46:32.088713 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-09-29 05:46:32.088727 | orchestrator |
2025-09-29 05:46:32.088737 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-09-29 05:46:32.088746 | orchestrator | Monday 29 September 2025 05:44:41 +0000 (0:00:00.171) 0:00:00.171 ******
2025-09-29 05:46:32.088756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-09-29 05:46:32.088789 | orchestrator |
2025-09-29 05:46:32.088799 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-09-29 05:46:32.088808 | orchestrator | Monday 29 September 2025 05:44:41 +0000 (0:00:00.115) 0:00:00.287 ******
2025-09-29 05:46:32.088817 | orchestrator | ok: [testbed-manager]
2025-09-29 05:46:32.088827 | orchestrator |
2025-09-29 05:46:32.088835 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-09-29 05:46:32.088844 | orchestrator | Monday 29 September 2025 05:44:42 +0000 (0:00:01.514) 0:00:01.801 ******
2025-09-29 05:46:32.088853 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-09-29 05:46:32.088861 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-09-29 05:46:32.088870 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-09-29 05:46:32.088878 | orchestrator |
2025-09-29 05:46:32.088887 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-09-29 05:46:32.088896 | orchestrator | Monday 29 September 2025 05:44:43 +0000 (0:00:01.178) 0:00:02.979 ******
2025-09-29 05:46:32.088904 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-09-29 05:46:32.088912 | orchestrator |
2025-09-29 05:46:32.088921 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-09-29 05:46:32.088929 | orchestrator | Monday 29 September 2025 05:44:45 +0000 (0:00:01.123) 0:00:04.103 ******
2025-09-29 05:46:32.088938 | orchestrator | ok: [testbed-manager]
2025-09-29 05:46:32.088946 | orchestrator |
2025-09-29 05:46:32.088955 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-09-29 05:46:32.088963 | orchestrator | Monday 29 September 2025 05:44:45 +0000 (0:00:00.415) 0:00:04.519 ******
2025-09-29 05:46:32.088972 | orchestrator | changed: [testbed-manager]
2025-09-29 05:46:32.088980 | orchestrator |
2025-09-29 05:46:32.088989 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-09-29 05:46:32.088997 | orchestrator | Monday 29 September 2025 05:44:46 +0000 (0:00:00.970) 0:00:05.489 ******
2025-09-29 05:46:32.089005 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-09-29 05:46:32.089015 | orchestrator | ok: [testbed-manager] 2025-09-29 05:46:32.089023 | orchestrator | 2025-09-29 05:46:32.089032 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-29 05:46:32.089040 | orchestrator | Monday 29 September 2025 05:45:17 +0000 (0:00:31.440) 0:00:36.929 ****** 2025-09-29 05:46:32.089049 | orchestrator | changed: [testbed-manager] 2025-09-29 05:46:32.089057 | orchestrator | 2025-09-29 05:46:32.089066 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-29 05:46:32.089074 | orchestrator | Monday 29 September 2025 05:45:31 +0000 (0:00:13.107) 0:00:50.037 ****** 2025-09-29 05:46:32.089083 | orchestrator | Pausing for 60 seconds 2025-09-29 05:46:32.089092 | orchestrator | changed: [testbed-manager] 2025-09-29 05:46:32.089100 | orchestrator | 2025-09-29 05:46:32.089109 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-29 05:46:32.089118 | orchestrator | Monday 29 September 2025 05:46:31 +0000 (0:01:00.067) 0:01:50.104 ****** 2025-09-29 05:46:32.089126 | orchestrator | ok: [testbed-manager] 2025-09-29 05:46:32.089135 | orchestrator | 2025-09-29 05:46:32.089144 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-29 05:46:32.089154 | orchestrator | Monday 29 September 2025 05:46:31 +0000 (0:00:00.072) 0:01:50.176 ****** 2025-09-29 05:46:32.089165 | orchestrator | changed: [testbed-manager] 2025-09-29 05:46:32.089175 | orchestrator | 2025-09-29 05:46:32.089184 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:46:32.089194 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:46:32.089204 | orchestrator | 2025-09-29 05:46:32.089214 | orchestrator | 2025-09-29 05:46:32.089230 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-29 05:46:32.089240 | orchestrator | Monday 29 September 2025 05:46:31 +0000 (0:00:00.655) 0:01:50.832 ****** 2025-09-29 05:46:32.089250 | orchestrator | =============================================================================== 2025-09-29 05:46:32.089260 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-29 05:46:32.089270 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.44s 2025-09-29 05:46:32.089281 | orchestrator | osism.services.squid : Restart squid service --------------------------- 13.11s 2025-09-29 05:46:32.089291 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.51s 2025-09-29 05:46:32.089301 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2025-09-29 05:46:32.089311 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2025-09-29 05:46:32.089321 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2025-09-29 05:46:32.089331 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-09-29 05:46:32.089341 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.42s 2025-09-29 05:46:32.089351 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.12s 2025-09-29 05:46:32.089360 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-09-29 05:46:32.385787 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-29 05:46:32.386301 | orchestrator | ++ semver latest 9.0.0 2025-09-29 05:46:32.444107 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-29 05:46:32.444167 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-29 05:46:32.444888 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-29 05:46:44.417458 | orchestrator | 2025-09-29 05:46:44 | INFO  | Task 10c72ee5-480c-4e7d-a903-411ff0be1388 (operator) was prepared for execution. 2025-09-29 05:46:44.417570 | orchestrator | 2025-09-29 05:46:44 | INFO  | It takes a moment until task 10c72ee5-480c-4e7d-a903-411ff0be1388 (operator) has been started and output is visible here. 2025-09-29 05:46:59.531712 | orchestrator | 2025-09-29 05:46:59.531830 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-29 05:46:59.531847 | orchestrator | 2025-09-29 05:46:59.531859 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-29 05:46:59.531870 | orchestrator | Monday 29 September 2025 05:46:48 +0000 (0:00:00.130) 0:00:00.130 ****** 2025-09-29 05:46:59.531882 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:46:59.531894 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:46:59.531905 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:46:59.531916 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:46:59.531926 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:46:59.531937 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:46:59.531947 | orchestrator | 2025-09-29 05:46:59.531958 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-29 05:46:59.531969 | orchestrator | Monday 29 September 2025 05:46:51 +0000 (0:00:03.077) 0:00:03.208 ****** 2025-09-29 05:46:59.531980 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:46:59.531993 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:46:59.532006 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:46:59.532018 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:46:59.532030 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:46:59.532042 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:46:59.532055 | orchestrator | 2025-09-29 
05:46:59.532072 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-29 05:46:59.532085 | orchestrator | 2025-09-29 05:46:59.532097 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-29 05:46:59.532110 | orchestrator | Monday 29 September 2025 05:46:51 +0000 (0:00:00.692) 0:00:03.900 ****** 2025-09-29 05:46:59.532122 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:46:59.532134 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:46:59.532147 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:46:59.532188 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:46:59.532200 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:46:59.532212 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:46:59.532225 | orchestrator | 2025-09-29 05:46:59.532237 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-29 05:46:59.532250 | orchestrator | Monday 29 September 2025 05:46:52 +0000 (0:00:00.157) 0:00:04.058 ****** 2025-09-29 05:46:59.532262 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:46:59.532274 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:46:59.532286 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:46:59.532299 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:46:59.532311 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:46:59.532323 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:46:59.532336 | orchestrator | 2025-09-29 05:46:59.532364 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-29 05:46:59.532375 | orchestrator | Monday 29 September 2025 05:46:52 +0000 (0:00:00.131) 0:00:04.190 ****** 2025-09-29 05:46:59.532386 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:46:59.532398 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:46:59.532408 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:46:59.532419 | 
orchestrator | changed: [testbed-node-2] 2025-09-29 05:46:59.532457 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:46:59.532470 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:46:59.532481 | orchestrator | 2025-09-29 05:46:59.532492 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-29 05:46:59.532503 | orchestrator | Monday 29 September 2025 05:46:52 +0000 (0:00:00.632) 0:00:04.822 ****** 2025-09-29 05:46:59.532514 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:46:59.532524 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:46:59.532535 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:46:59.532546 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:46:59.532556 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:46:59.532567 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:46:59.532577 | orchestrator | 2025-09-29 05:46:59.532588 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-29 05:46:59.532599 | orchestrator | Monday 29 September 2025 05:46:53 +0000 (0:00:00.769) 0:00:05.591 ****** 2025-09-29 05:46:59.532610 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-29 05:46:59.532621 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-29 05:46:59.532632 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-29 05:46:59.532642 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-29 05:46:59.532653 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-29 05:46:59.532664 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-29 05:46:59.532674 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-29 05:46:59.532685 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-09-29 05:46:59.532696 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-29 05:46:59.532707 | orchestrator | changed: 
[testbed-node-2] => (item=sudo) 2025-09-29 05:46:59.532717 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-29 05:46:59.532728 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-29 05:46:59.532739 | orchestrator | 2025-09-29 05:46:59.532749 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-29 05:46:59.532760 | orchestrator | Monday 29 September 2025 05:46:54 +0000 (0:00:01.126) 0:00:06.718 ****** 2025-09-29 05:46:59.532771 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:46:59.532781 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:46:59.532792 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:46:59.532802 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:46:59.532813 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:46:59.532824 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:46:59.532834 | orchestrator | 2025-09-29 05:46:59.532845 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-29 05:46:59.532865 | orchestrator | Monday 29 September 2025 05:46:56 +0000 (0:00:01.268) 0:00:07.987 ****** 2025-09-29 05:46:59.532876 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-29 05:46:59.532887 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-09-29 05:46:59.532897 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-09-29 05:46:59.532908 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-09-29 05:46:59.532940 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-09-29 05:46:59.532952 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-09-29 05:46:59.532962 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-09-29 05:46:59.532973 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-09-29 05:46:59.532984 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-09-29 05:46:59.532994 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-09-29 05:46:59.533005 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-09-29 05:46:59.533015 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-09-29 05:46:59.533026 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-09-29 05:46:59.533036 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-09-29 05:46:59.533047 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-09-29 05:46:59.533057 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-09-29 05:46:59.533068 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-09-29 05:46:59.533078 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-09-29 05:46:59.533089 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-09-29 05:46:59.533099 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-09-29 05:46:59.533110 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-09-29 05:46:59.533121 | 
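The [WARNING] emitted here is benign but recurring: Ansible created `/root/.ansible/tmp` on the fly with mode 0700. The fix the warning itself suggests is to pre-create `remote_tmp` with the intended permissions. A minimal sketch, using a scratch path for illustration (on the nodes above the real target is `/root/.ansible/tmp`, or `~/.ansible/tmp` of whichever user the module runs as):

```shell
# Pre-create Ansible's remote_tmp so the on-the-fly creation warning
# does not fire. Scratch path used here; the log's real target is
# /root/.ansible/tmp.
remote_tmp="${TMPDIR:-/tmp}/demo-ansible/tmp"

# install -d creates the directory (and parents) and applies the mode
# to the final component in one step (GNU coreutils).
install -d -m 0700 "$remote_tmp"

stat -c '%a' "$remote_tmp"   # prints: 700
```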
orchestrator | 2025-09-29 05:46:59.533131 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-29 05:46:59.533143 | orchestrator | Monday 29 September 2025 05:46:57 +0000 (0:00:01.362) 0:00:09.349 ****** 2025-09-29 05:46:59.533154 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:46:59.533165 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:46:59.533175 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:46:59.533186 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:46:59.533197 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:46:59.533207 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:46:59.533218 | orchestrator | 2025-09-29 05:46:59.533228 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-29 05:46:59.533239 | orchestrator | Monday 29 September 2025 05:46:57 +0000 (0:00:00.147) 0:00:09.497 ****** 2025-09-29 05:46:59.533249 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:46:59.533260 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:46:59.533270 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:46:59.533281 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:46:59.533291 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:46:59.533301 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:46:59.533312 | orchestrator | 2025-09-29 05:46:59.533323 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-29 05:46:59.533333 | orchestrator | Monday 29 September 2025 05:46:58 +0000 (0:00:00.593) 0:00:10.091 ****** 2025-09-29 05:46:59.533344 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:46:59.533355 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:46:59.533365 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:46:59.533376 | orchestrator | skipping: [testbed-node-3] 2025-09-29 
05:46:59.533393 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:46:59.533403 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:46:59.533414 | orchestrator | 2025-09-29 05:46:59.533425 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-29 05:46:59.533453 | orchestrator | Monday 29 September 2025 05:46:58 +0000 (0:00:00.172) 0:00:10.263 ****** 2025-09-29 05:46:59.533465 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-29 05:46:59.533475 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:46:59.533486 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-29 05:46:59.533496 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:46:59.533507 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-29 05:46:59.533517 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-29 05:46:59.533528 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-29 05:46:59.533538 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:46:59.533549 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:46:59.533559 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:46:59.533570 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-29 05:46:59.533580 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:46:59.533590 | orchestrator | 2025-09-29 05:46:59.533601 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-29 05:46:59.533612 | orchestrator | Monday 29 September 2025 05:46:59 +0000 (0:00:00.725) 0:00:10.989 ****** 2025-09-29 05:46:59.533623 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:46:59.533633 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:46:59.533644 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:46:59.533654 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:46:59.533665 | orchestrator | skipping: [testbed-node-4] 2025-09-29 
05:46:59.533675 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:46:59.533686 | orchestrator | 2025-09-29 05:46:59.533696 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-29 05:46:59.533707 | orchestrator | Monday 29 September 2025 05:46:59 +0000 (0:00:00.153) 0:00:11.143 ****** 2025-09-29 05:46:59.533718 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:46:59.533728 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:46:59.533739 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:46:59.533749 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:46:59.533760 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:46:59.533770 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:46:59.533781 | orchestrator | 2025-09-29 05:46:59.533791 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-29 05:46:59.533802 | orchestrator | Monday 29 September 2025 05:46:59 +0000 (0:00:00.177) 0:00:11.320 ****** 2025-09-29 05:46:59.533812 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:46:59.533823 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:46:59.533834 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:46:59.533844 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:46:59.533863 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:47:00.601657 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:47:00.601785 | orchestrator | 2025-09-29 05:47:00.601811 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-29 05:47:00.601832 | orchestrator | Monday 29 September 2025 05:46:59 +0000 (0:00:00.145) 0:00:11.466 ****** 2025-09-29 05:47:00.601852 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:47:00.601866 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:47:00.601877 | orchestrator | changed: [testbed-node-3] 2025-09-29 
05:47:00.601887 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:47:00.601898 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:47:00.601909 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:47:00.601920 | orchestrator | 2025-09-29 05:47:00.601932 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-29 05:47:00.601943 | orchestrator | Monday 29 September 2025 05:47:00 +0000 (0:00:00.619) 0:00:12.086 ****** 2025-09-29 05:47:00.601984 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:47:00.601996 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:47:00.602006 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:47:00.602017 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:47:00.602126 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:47:00.602138 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:47:00.602149 | orchestrator | 2025-09-29 05:47:00.602160 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:47:00.602175 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 05:47:00.602190 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 05:47:00.602203 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 05:47:00.602216 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 05:47:00.602247 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 05:47:00.602260 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 05:47:00.602272 | orchestrator | 2025-09-29 05:47:00.602285 | orchestrator | 2025-09-29 05:47:00.602306 | 
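The PLAY RECAP above closes the operator play: each node gained a group, a user added to `adm` and `sudo`, a sudoers drop-in, locale exports in `.bashrc`, and SSH authorized keys. As rough orientation, the changed tasks correspond to the manual steps below; the operator name `dragon` (OSISM's usual default) is an assumption here, since the log never prints it:

```shell
# Rough manual equivalent of the operator role's changed tasks.
# "dragon" is assumed as the operator name; the log does not show it.
user=dragon

# Shown as echoes rather than executed, since the real commands need root:
echo "groupadd $user"                           # Create operator group
echo "useradd -m -g $user -s /bin/bash $user"   # Create user
echo "usermod -aG adm,sudo $user"               # Add user to additional groups
echo "install -d -m 0700 -o $user /home/$user/.ssh"  # Create .ssh directory

# The copied sudoers drop-in typically grants passwordless sudo:
sudoers="${TMPDIR:-/tmp}/sudoers_$user"
printf '%s ALL=(ALL) NOPASSWD: ALL\n' "$user" > "$sudoers"
cat "$sudoers"   # prints: dragon ALL=(ALL) NOPASSWD: ALL
```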
orchestrator | TASKS RECAP ********************************************************************
2025-09-29 05:47:00.602320 | orchestrator | Monday 29 September 2025 05:47:00 +0000 (0:00:00.209) 0:00:12.295 ******
2025-09-29 05:47:00.602332 | orchestrator | ===============================================================================
2025-09-29 05:47:00.602345 | orchestrator | Gathering Facts --------------------------------------------------------- 3.08s
2025-09-29 05:47:00.602358 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.36s
2025-09-29 05:47:00.602373 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s
2025-09-29 05:47:00.602385 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.13s
2025-09-29 05:47:00.602398 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.77s
2025-09-29 05:47:00.602411 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s
2025-09-29 05:47:00.602423 | orchestrator | Do not require tty for all users ---------------------------------------- 0.69s
2025-09-29 05:47:00.602467 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s
2025-09-29 05:47:00.602481 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.62s
2025-09-29 05:47:00.602494 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-09-29 05:47:00.602507 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-09-29 05:47:00.602520 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s
2025-09-29 05:47:00.602531 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s
2025-09-29 05:47:00.602542 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-09-29 05:47:00.602553 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2025-09-29 05:47:00.602563 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2025-09-29 05:47:00.602574 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s
2025-09-29 05:47:00.602585 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s
2025-09-29 05:47:00.895306 | orchestrator | + osism apply --environment custom facts
2025-09-29 05:47:02.780093 | orchestrator | 2025-09-29 05:47:02 | INFO  | Trying to run play facts in environment custom
2025-09-29 05:47:12.953812 | orchestrator | 2025-09-29 05:47:12 | INFO  | Task 526f75c8-2498-4bdb-9ae5-23c7ebb92640 (facts) was prepared for execution.
2025-09-29 05:47:12.953930 | orchestrator | 2025-09-29 05:47:12 | INFO  | It takes a moment until task 526f75c8-2498-4bdb-9ae5-23c7ebb92640 (facts) has been started and output is visible here.
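The `facts` play that follows distributes static fact files. Anything dropped into `/etc/ansible/facts.d/` with a `.fact` suffix (JSON, INI, or an executable emitting JSON) surfaces as `ansible_local.<name>` on the next fact gathering, which is why the play ends with a re-gather. A sketch using a scratch directory and a made-up `testbed_ceph_osd_devices` payload (the real file contents are not shown in the log):

```shell
# Local facts live in /etc/ansible/facts.d; a scratch dir is used here.
factsdir="${TMPDIR:-/tmp}/facts.d"
install -d -m 0755 "$factsdir"

# A static .fact file may be plain JSON; this one would appear as
# ansible_local.testbed_ceph_osd_devices after the next setup run.
# The device list below is illustrative only.
cat > "$factsdir/testbed_ceph_osd_devices.fact" <<'EOF'
{"devices": ["/dev/sdb", "/dev/sdc"]}
EOF

# Sanity-check that the payload parses as JSON
python3 -c "import json, sys; json.load(open(sys.argv[1]))" \
    "$factsdir/testbed_ceph_osd_devices.fact" && echo ok   # prints: ok
```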
2025-09-29 05:47:54.610837 | orchestrator | 2025-09-29 05:47:54.610950 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-29 05:47:54.610965 | orchestrator | 2025-09-29 05:47:54.610976 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-29 05:47:54.610987 | orchestrator | Monday 29 September 2025 05:47:16 +0000 (0:00:00.089) 0:00:00.089 ****** 2025-09-29 05:47:54.610997 | orchestrator | ok: [testbed-manager] 2025-09-29 05:47:54.611008 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:47:54.611018 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:47:54.611028 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:47:54.611038 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:47:54.611048 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:47:54.611058 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:47:54.611067 | orchestrator | 2025-09-29 05:47:54.611077 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-29 05:47:54.611087 | orchestrator | Monday 29 September 2025 05:47:17 +0000 (0:00:01.455) 0:00:01.545 ****** 2025-09-29 05:47:54.611097 | orchestrator | ok: [testbed-manager] 2025-09-29 05:47:54.611106 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:47:54.611116 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:47:54.611126 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:47:54.611135 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:47:54.611145 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:47:54.611154 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:47:54.611164 | orchestrator | 2025-09-29 05:47:54.611174 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-29 05:47:54.611183 | orchestrator | 2025-09-29 05:47:54.611193 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-09-29 05:47:54.611203 | orchestrator | Monday 29 September 2025 05:47:19 +0000 (0:00:01.239) 0:00:02.785 ****** 2025-09-29 05:47:54.611212 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.611222 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.611232 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.611241 | orchestrator | 2025-09-29 05:47:54.611251 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-29 05:47:54.611262 | orchestrator | Monday 29 September 2025 05:47:19 +0000 (0:00:00.121) 0:00:02.907 ****** 2025-09-29 05:47:54.611271 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.611281 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.611291 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.611300 | orchestrator | 2025-09-29 05:47:54.611310 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-29 05:47:54.611320 | orchestrator | Monday 29 September 2025 05:47:19 +0000 (0:00:00.220) 0:00:03.127 ****** 2025-09-29 05:47:54.611329 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.611339 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.611349 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.611358 | orchestrator | 2025-09-29 05:47:54.611368 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-29 05:47:54.611395 | orchestrator | Monday 29 September 2025 05:47:19 +0000 (0:00:00.193) 0:00:03.320 ****** 2025-09-29 05:47:54.611408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 05:47:54.611420 | orchestrator | 2025-09-29 05:47:54.611432 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-09-29 05:47:54.611466 | orchestrator | Monday 29 September 2025 05:47:19 +0000 (0:00:00.148) 0:00:03.469 ****** 2025-09-29 05:47:54.611478 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.611489 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.611500 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.611511 | orchestrator | 2025-09-29 05:47:54.611584 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-29 05:47:54.611604 | orchestrator | Monday 29 September 2025 05:47:20 +0000 (0:00:00.442) 0:00:03.912 ****** 2025-09-29 05:47:54.611620 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:47:54.611634 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:47:54.611646 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:47:54.611657 | orchestrator | 2025-09-29 05:47:54.611667 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-29 05:47:54.611679 | orchestrator | Monday 29 September 2025 05:47:20 +0000 (0:00:00.106) 0:00:04.019 ****** 2025-09-29 05:47:54.611690 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:47:54.611701 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:47:54.611712 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:47:54.611722 | orchestrator | 2025-09-29 05:47:54.611733 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-29 05:47:54.611743 | orchestrator | Monday 29 September 2025 05:47:21 +0000 (0:00:01.033) 0:00:05.052 ****** 2025-09-29 05:47:54.611752 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.611761 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.611771 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.611780 | orchestrator | 2025-09-29 05:47:54.611790 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-29 
05:47:54.611800 | orchestrator | Monday 29 September 2025 05:47:21 +0000 (0:00:00.466) 0:00:05.519 ****** 2025-09-29 05:47:54.611809 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:47:54.611819 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:47:54.611828 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:47:54.611837 | orchestrator | 2025-09-29 05:47:54.611847 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-29 05:47:54.611857 | orchestrator | Monday 29 September 2025 05:47:23 +0000 (0:00:01.031) 0:00:06.550 ****** 2025-09-29 05:47:54.611866 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:47:54.611875 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:47:54.611885 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:47:54.611894 | orchestrator | 2025-09-29 05:47:54.611904 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-29 05:47:54.611913 | orchestrator | Monday 29 September 2025 05:47:39 +0000 (0:00:16.229) 0:00:22.780 ****** 2025-09-29 05:47:54.611923 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:47:54.611932 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:47:54.611942 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:47:54.611951 | orchestrator | 2025-09-29 05:47:54.611960 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-29 05:47:54.611987 | orchestrator | Monday 29 September 2025 05:47:39 +0000 (0:00:00.141) 0:00:22.921 ****** 2025-09-29 05:47:54.611997 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:47:54.612007 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:47:54.612016 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:47:54.612026 | orchestrator | 2025-09-29 05:47:54.612036 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-29 
05:47:54.612045 | orchestrator | Monday 29 September 2025 05:47:45 +0000 (0:00:06.388) 0:00:29.310 ****** 2025-09-29 05:47:54.612055 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.612065 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.612074 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.612083 | orchestrator | 2025-09-29 05:47:54.612093 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-29 05:47:54.612103 | orchestrator | Monday 29 September 2025 05:47:46 +0000 (0:00:00.410) 0:00:29.720 ****** 2025-09-29 05:47:54.612121 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-29 05:47:54.612131 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-29 05:47:54.612140 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-29 05:47:54.612150 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-29 05:47:54.612159 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-29 05:47:54.612168 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-29 05:47:54.612178 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-29 05:47:54.612187 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-29 05:47:54.612197 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-29 05:47:54.612206 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-29 05:47:54.612216 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-29 05:47:54.612225 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-29 05:47:54.612235 | orchestrator | 2025-09-29 05:47:54.612244 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-09-29 05:47:54.612254 | orchestrator | Monday 29 September 2025 05:47:49 +0000 (0:00:03.419) 0:00:33.140 ****** 2025-09-29 05:47:54.612263 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.612273 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.612282 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.612292 | orchestrator | 2025-09-29 05:47:54.612301 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-29 05:47:54.612311 | orchestrator | 2025-09-29 05:47:54.612321 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-29 05:47:54.612330 | orchestrator | Monday 29 September 2025 05:47:50 +0000 (0:00:01.193) 0:00:34.334 ****** 2025-09-29 05:47:54.612340 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:47:54.612350 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:47:54.612359 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:47:54.612369 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:47:54.612378 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:47:54.612388 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:47:54.612397 | orchestrator | ok: [testbed-manager] 2025-09-29 05:47:54.612406 | orchestrator | 2025-09-29 05:47:54.612416 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:47:54.612426 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:47:54.612437 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:47:54.612448 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:47:54.612494 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:47:54.612505 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 05:47:54.612534 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 05:47:54.612549 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 05:47:54.612559 | orchestrator |
2025-09-29 05:47:54.612569 | orchestrator |
2025-09-29 05:47:54.612579 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 05:47:54.612596 | orchestrator | Monday 29 September 2025 05:47:54 +0000 (0:00:03.795) 0:00:38.129 ******
2025-09-29 05:47:54.612605 | orchestrator | ===============================================================================
2025-09-29 05:47:54.612615 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.23s
2025-09-29 05:47:54.612624 | orchestrator | Install required packages (Debian) -------------------------------------- 6.39s
2025-09-29 05:47:54.612634 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.80s
2025-09-29 05:47:54.612644 | orchestrator | Copy fact files --------------------------------------------------------- 3.42s
2025-09-29 05:47:54.612653 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2025-09-29 05:47:54.612663 | orchestrator | Copy fact file ---------------------------------------------------------- 1.24s
2025-09-29 05:47:54.612678 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.19s
2025-09-29 05:47:54.786823 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s
2025-09-29 05:47:54.786895 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.03s
2025-09-29 05:47:54.786902 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-09-29 05:47:54.786907 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2025-09-29 05:47:54.786912 | orchestrator | Create custom facts directory ------------------------------------------- 0.41s
2025-09-29 05:47:54.786920 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2025-09-29 05:47:54.786929 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2025-09-29 05:47:54.786937 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-09-29 05:47:54.786946 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.14s
2025-09-29 05:47:54.786953 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-09-29 05:47:54.786961 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-09-29 05:47:54.970173 | orchestrator | + osism apply bootstrap
2025-09-29 05:48:06.927081 | orchestrator | 2025-09-29 05:48:06 | INFO  | Task a158ab0c-b267-4f2b-92d8-5d54e903d5df (bootstrap) was prepared for execution.
2025-09-29 05:48:06.927198 | orchestrator | 2025-09-29 05:48:06 | INFO  | It takes a moment until task a158ab0c-b267-4f2b-92d8-5d54e903d5df (bootstrap) has been started and output is visible here.
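The skipped `Include tasks for Ubuntu < 24.04` together with `Remove sources.list file` and `Copy ubuntu.sources file` in the play above reflects the switch to deb822-style APT sources in Ubuntu 24.04: the classic `/etc/apt/sources.list` is retired in favour of `/etc/apt/sources.list.d/ubuntu.sources`. A sketch of what such a file looks like, using stock archive values rather than the contents actually deployed here, and a scratch path instead of the real target:

```shell
# deb822-style APT source as used on Ubuntu 24.04. Scratch path here;
# the real file is /etc/apt/sources.list.d/ubuntu.sources. Field values
# are the stock Ubuntu archive defaults, not what this job deployed.
target="${TMPDIR:-/tmp}/ubuntu.sources"
cat > "$target" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF

grep -c '^Suites:' "$target"   # prints: 1
```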
2025-09-29 05:48:22.752371 | orchestrator | 2025-09-29 05:48:22.752482 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-29 05:48:22.752498 | orchestrator | 2025-09-29 05:48:22.752510 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-29 05:48:22.752522 | orchestrator | Monday 29 September 2025 05:48:10 +0000 (0:00:00.148) 0:00:00.148 ****** 2025-09-29 05:48:22.752533 | orchestrator | ok: [testbed-manager] 2025-09-29 05:48:22.752545 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:48:22.752583 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:48:22.752596 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:48:22.752606 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:48:22.752617 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:48:22.752628 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:48:22.752639 | orchestrator | 2025-09-29 05:48:22.752650 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-29 05:48:22.752661 | orchestrator | 2025-09-29 05:48:22.752680 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-29 05:48:22.752692 | orchestrator | Monday 29 September 2025 05:48:11 +0000 (0:00:00.220) 0:00:00.368 ****** 2025-09-29 05:48:22.752703 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:48:22.752714 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:48:22.752724 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:48:22.752735 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:48:22.752771 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:48:22.752791 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:48:22.752812 | orchestrator | ok: [testbed-manager] 2025-09-29 05:48:22.752830 | orchestrator | 2025-09-29 05:48:22.752849 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-09-29 05:48:22.752864 | orchestrator |
2025-09-29 05:48:22.752875 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-29 05:48:22.752886 | orchestrator | Monday 29 September 2025 05:48:14 +0000 (0:00:03.748) 0:00:04.117 ******
2025-09-29 05:48:22.752897 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-29 05:48:22.752908 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-29 05:48:22.752919 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-29 05:48:22.752930 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-29 05:48:22.752941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-29 05:48:22.752951 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-29 05:48:22.752962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 05:48:22.752973 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-29 05:48:22.752983 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-29 05:48:22.752994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-29 05:48:22.753005 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-29 05:48:22.753016 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-29 05:48:22.753027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 05:48:22.753038 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-29 05:48:22.753048 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-29 05:48:22.753059 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-29 05:48:22.753070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-29 05:48:22.753080 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:22.753091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 05:48:22.753102 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-29 05:48:22.753112 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-29 05:48:22.753123 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-29 05:48:22.753133 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-29 05:48:22.753144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-29 05:48:22.753155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-29 05:48:22.753165 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-29 05:48:22.753176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-29 05:48:22.753186 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-29 05:48:22.753197 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-29 05:48:22.753207 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-29 05:48:22.753218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-29 05:48:22.753229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-29 05:48:22.753239 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-29 05:48:22.753250 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-29 05:48:22.753260 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-29 05:48:22.753271 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:48:22.753281 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:48:22.753292 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-29 05:48:22.753302 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-29 05:48:22.753319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-29 05:48:22.753330 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-29 05:48:22.753340 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:48:22.753351 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-29 05:48:22.753361 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-29 05:48:22.753372 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-29 05:48:22.753383 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-29 05:48:22.753410 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-29 05:48:22.753422 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-29 05:48:22.753433 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-29 05:48:22.753443 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-29 05:48:22.753454 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:48:22.753465 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-29 05:48:22.753475 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-29 05:48:22.753486 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:48:22.753497 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-29 05:48:22.753507 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:48:22.753518 | orchestrator |
2025-09-29 05:48:22.753529 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-29 05:48:22.753540 | orchestrator |
2025-09-29 05:48:22.753551 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-29 05:48:22.753583 | orchestrator | Monday 29 September 2025 05:48:15 +0000 (0:00:00.371) 0:00:04.489 ******
2025-09-29 05:48:22.753594 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:22.753605 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:22.753616 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:22.753627 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:22.753637 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:22.753648 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:22.753659 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:22.753669 | orchestrator |
2025-09-29 05:48:22.753680 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-29 05:48:22.753691 | orchestrator | Monday 29 September 2025 05:48:16 +0000 (0:00:01.173) 0:00:05.662 ******
2025-09-29 05:48:22.753702 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:22.753713 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:22.753723 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:22.753734 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:22.753745 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:22.753755 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:22.753766 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:22.753777 | orchestrator |
2025-09-29 05:48:22.753788 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-29 05:48:22.753798 | orchestrator | Monday 29 September 2025 05:48:18 +0000 (0:00:01.856) 0:00:07.519 ******
2025-09-29 05:48:22.753810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:48:22.753823 | orchestrator |
2025-09-29 05:48:22.753834 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-29 05:48:22.753845 | orchestrator | Monday 29 September 2025 05:48:18 +0000 (0:00:00.231) 0:00:07.750 ******
2025-09-29 05:48:22.753856 | orchestrator | changed: [testbed-manager]
2025-09-29 05:48:22.753867 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:48:22.753878 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:48:22.753889 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:48:22.753900 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:48:22.753916 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:48:22.753927 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:48:22.753938 | orchestrator |
2025-09-29 05:48:22.753949 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-29 05:48:22.753960 | orchestrator | Monday 29 September 2025 05:48:20 +0000 (0:00:01.841) 0:00:09.592 ******
2025-09-29 05:48:22.753971 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:22.753983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:48:22.753996 | orchestrator |
2025-09-29 05:48:22.754007 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-29 05:48:22.754090 | orchestrator | Monday 29 September 2025 05:48:20 +0000 (0:00:00.262) 0:00:09.854 ******
2025-09-29 05:48:22.754102 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:48:22.754113 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:48:22.754123 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:48:22.754165 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:48:22.754177 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:48:22.754188 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:48:22.754199 | orchestrator |
2025-09-29 05:48:22.754209 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-09-29 05:48:22.754220 | orchestrator | Monday 29 September 2025 05:48:21 +0000 (0:00:00.990) 0:00:10.845 ******
2025-09-29 05:48:22.754231 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:22.754242 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:48:22.754252 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:48:22.754263 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:48:22.754273 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:48:22.754284 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:48:22.754295 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:48:22.754305 | orchestrator |
2025-09-29 05:48:22.754317 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-09-29 05:48:22.754328 | orchestrator | Monday 29 September 2025 05:48:22 +0000 (0:00:00.682) 0:00:11.528 ******
2025-09-29 05:48:22.754338 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:48:22.754349 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:48:22.754360 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:48:22.754379 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:48:22.754390 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:48:22.754400 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:48:22.754411 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:22.754422 | orchestrator |
2025-09-29 05:48:22.754432 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-29 05:48:22.754444 | orchestrator | Monday 29 September 2025 05:48:22 +0000 (0:00:00.200) 0:00:11.963 ******
2025-09-29 05:48:22.754455 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:22.754466 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:48:22.754485 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:48:33.592791 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:48:33.592901 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:48:33.592918 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:48:33.592937 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:48:33.592950 | orchestrator |
2025-09-29 05:48:33.592964 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-29 05:48:33.592976 | orchestrator | Monday 29 September 2025 05:48:22 +0000 (0:00:00.200) 0:00:12.163 ******
2025-09-29 05:48:33.592988 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:48:33.593014 | orchestrator |
2025-09-29 05:48:33.593055 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-29 05:48:33.593068 | orchestrator | Monday 29 September 2025 05:48:23 +0000 (0:00:00.280) 0:00:12.444 ******
2025-09-29 05:48:33.593080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:48:33.593091 | orchestrator |
2025-09-29 05:48:33.593101 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-29 05:48:33.593112 | orchestrator | Monday 29 September 2025 05:48:23 +0000 (0:00:00.290) 0:00:12.734 ******
2025-09-29 05:48:33.593123 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.593135 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.593146 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.593166 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.593185 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.593204 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.593224 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.593244 | orchestrator |
2025-09-29 05:48:33.593262 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-29 05:48:33.593273 | orchestrator | Monday 29 September 2025 05:48:24 +0000 (0:00:01.373) 0:00:14.108 ******
2025-09-29 05:48:33.593284 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:33.593295 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:48:33.593305 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:48:33.593318 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:48:33.593330 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:48:33.593343 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:48:33.593362 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:48:33.593381 | orchestrator |
2025-09-29 05:48:33.593401 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-29 05:48:33.593419 | orchestrator | Monday 29 September 2025 05:48:24 +0000 (0:00:00.163) 0:00:14.271 ******
2025-09-29 05:48:33.593439 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.593460 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.593479 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.593498 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.593511 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.593523 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.593535 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.593548 | orchestrator |
2025-09-29 05:48:33.593561 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-29 05:48:33.593594 | orchestrator | Monday 29 September 2025 05:48:25 +0000 (0:00:00.486) 0:00:14.758 ******
2025-09-29 05:48:33.593607 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:33.593620 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:48:33.593634 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:48:33.593646 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:48:33.593660 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:48:33.593672 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:48:33.593682 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:48:33.593693 | orchestrator |
2025-09-29 05:48:33.593704 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-29 05:48:33.593715 | orchestrator | Monday 29 September 2025 05:48:25 +0000 (0:00:00.251) 0:00:15.010 ******
2025-09-29 05:48:33.593726 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.593736 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:48:33.593747 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:48:33.593757 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:48:33.593768 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:48:33.593778 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:48:33.593789 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:48:33.593799 | orchestrator |
2025-09-29 05:48:33.593810 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-29 05:48:33.593832 | orchestrator | Monday 29 September 2025 05:48:26 +0000 (0:00:00.515) 0:00:15.525 ******
2025-09-29 05:48:33.593843 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.593853 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:48:33.593864 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:48:33.593874 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:48:33.593885 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:48:33.593895 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:48:33.593906 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:48:33.593917 | orchestrator |
2025-09-29 05:48:33.593927 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-29 05:48:33.593938 | orchestrator | Monday 29 September 2025 05:48:27 +0000 (0:00:01.077) 0:00:16.603 ******
2025-09-29 05:48:33.593949 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.593959 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.593970 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.593980 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.593991 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.594001 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.594058 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.594071 | orchestrator |
2025-09-29 05:48:33.594082 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-29 05:48:33.594092 | orchestrator | Monday 29 September 2025 05:48:28 +0000 (0:00:01.054) 0:00:17.658 ******
2025-09-29 05:48:33.594122 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:48:33.594134 | orchestrator |
2025-09-29 05:48:33.594144 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-29 05:48:33.594155 | orchestrator | Monday 29 September 2025 05:48:28 +0000 (0:00:00.242) 0:00:17.900 ******
2025-09-29 05:48:33.594166 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:33.594177 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:48:33.594188 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:48:33.594198 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:48:33.594209 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:48:33.594220 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:48:33.594236 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:48:33.594247 | orchestrator |
2025-09-29 05:48:33.594258 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-29 05:48:33.594269 | orchestrator | Monday 29 September 2025 05:48:29 +0000 (0:00:01.171) 0:00:19.072 ******
2025-09-29 05:48:33.594280 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.594290 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.594301 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.594312 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.594322 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.594333 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.594343 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.594353 | orchestrator |
2025-09-29 05:48:33.594364 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-29 05:48:33.594375 | orchestrator | Monday 29 September 2025 05:48:29 +0000 (0:00:00.224) 0:00:19.296 ******
2025-09-29 05:48:33.594386 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.594396 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.594407 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.594417 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.594428 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.594438 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.594448 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.594459 | orchestrator |
2025-09-29 05:48:33.594470 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-29 05:48:33.594480 | orchestrator | Monday 29 September 2025 05:48:30 +0000 (0:00:00.179) 0:00:19.476 ******
2025-09-29 05:48:33.594498 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.594509 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.594520 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.594530 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.594540 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.594551 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.594561 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.594571 | orchestrator |
2025-09-29 05:48:33.594599 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-29 05:48:33.594610 | orchestrator | Monday 29 September 2025 05:48:30 +0000 (0:00:00.260) 0:00:19.656 ******
2025-09-29 05:48:33.594621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:48:33.594634 | orchestrator |
2025-09-29 05:48:33.594645 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-29 05:48:33.594656 | orchestrator | Monday 29 September 2025 05:48:30 +0000 (0:00:00.488) 0:00:19.916 ******
2025-09-29 05:48:33.594667 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.594678 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.594688 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.594698 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.594709 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.594719 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.594730 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.594740 | orchestrator |
2025-09-29 05:48:33.594751 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-29 05:48:33.594762 | orchestrator | Monday 29 September 2025 05:48:31 +0000 (0:00:00.209) 0:00:20.404 ******
2025-09-29 05:48:33.594772 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:48:33.594784 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:48:33.594804 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:48:33.594817 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:48:33.594828 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:48:33.594838 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:48:33.594849 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:48:33.594859 | orchestrator |
2025-09-29 05:48:33.594871 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-29 05:48:33.594881 | orchestrator | Monday 29 September 2025 05:48:31 +0000 (0:00:00.209) 0:00:20.614 ******
2025-09-29 05:48:33.594892 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.594903 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.594913 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.594924 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.594935 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:48:33.594945 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:48:33.594956 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:48:33.594966 | orchestrator |
2025-09-29 05:48:33.594977 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-29 05:48:33.594988 | orchestrator | Monday 29 September 2025 05:48:32 +0000 (0:00:00.884) 0:00:21.498 ******
2025-09-29 05:48:33.594998 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.595009 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.595020 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.595030 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.595040 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:48:33.595051 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:48:33.595062 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:48:33.595072 | orchestrator |
2025-09-29 05:48:33.595083 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-29 05:48:33.595093 | orchestrator | Monday 29 September 2025 05:48:32 +0000 (0:00:00.478) 0:00:21.977 ******
2025-09-29 05:48:33.595104 | orchestrator | ok: [testbed-manager]
2025-09-29 05:48:33.595122 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:48:33.595141 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:48:33.595153 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:48:33.595171 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:49:12.186331 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:49:12.186435 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:49:12.186449 | orchestrator |
2025-09-29 05:49:12.186461 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-29 05:49:12.186473 | orchestrator | Monday 29 September 2025 05:48:33 +0000 (0:00:00.946) 0:00:22.924 ******
2025-09-29 05:49:12.186483 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.186494 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.186504 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.186513 | orchestrator | changed: [testbed-manager]
2025-09-29 05:49:12.186523 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:49:12.186533 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:49:12.186542 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:49:12.186552 | orchestrator |
2025-09-29 05:49:12.186562 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-09-29 05:49:12.186572 | orchestrator | Monday 29 September 2025 05:48:50 +0000 (0:00:17.148) 0:00:40.072 ******
2025-09-29 05:49:12.186582 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.186591 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.186601 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.186610 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.186620 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.186688 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.186701 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.186711 | orchestrator |
2025-09-29 05:49:12.186721 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-09-29 05:49:12.186731 | orchestrator | Monday 29 September 2025 05:48:50 +0000 (0:00:00.219) 0:00:40.291 ******
2025-09-29 05:49:12.186740 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.186750 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.186760 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.186769 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.186778 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.186788 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.186797 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.186807 | orchestrator |
2025-09-29 05:49:12.186816 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-09-29 05:49:12.186826 | orchestrator | Monday 29 September 2025 05:48:51 +0000 (0:00:00.242) 0:00:40.534 ******
2025-09-29 05:49:12.186836 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.186845 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.186855 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.186866 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.186878 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.186889 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.186900 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.186915 | orchestrator |
2025-09-29 05:49:12.186932 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-09-29 05:49:12.186949 | orchestrator | Monday 29 September 2025 05:48:51 +0000 (0:00:00.223) 0:00:40.758 ******
2025-09-29 05:49:12.186968 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:49:12.186986 | orchestrator |
2025-09-29 05:49:12.187003 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-09-29 05:49:12.187020 | orchestrator | Monday 29 September 2025 05:48:51 +0000 (0:00:00.313) 0:00:41.071 ******
2025-09-29 05:49:12.187035 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.187052 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.187071 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.187117 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.187129 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.187140 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.187151 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.187163 | orchestrator |
2025-09-29 05:49:12.187174 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-09-29 05:49:12.187185 | orchestrator | Monday 29 September 2025 05:48:53 +0000 (0:00:01.367) 0:00:42.438 ******
2025-09-29 05:49:12.187196 | orchestrator | changed: [testbed-manager]
2025-09-29 05:49:12.187207 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:49:12.187219 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:49:12.187228 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:49:12.187237 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:49:12.187246 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:49:12.187256 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:49:12.187265 | orchestrator |
2025-09-29 05:49:12.187275 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-09-29 05:49:12.187301 | orchestrator | Monday 29 September 2025 05:48:54 +0000 (0:00:01.133) 0:00:43.572 ******
2025-09-29 05:49:12.187311 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.187321 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.187330 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.187339 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.187349 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.187358 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.187367 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.187376 | orchestrator |
2025-09-29 05:49:12.187386 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-09-29 05:49:12.187396 | orchestrator | Monday 29 September 2025 05:48:55 +0000 (0:00:00.805) 0:00:44.377 ******
2025-09-29 05:49:12.187406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:49:12.187417 | orchestrator |
2025-09-29 05:49:12.187427 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-09-29 05:49:12.187437 | orchestrator | Monday 29 September 2025 05:48:55 +0000 (0:00:00.324) 0:00:44.701 ******
2025-09-29 05:49:12.187447 | orchestrator | changed: [testbed-manager]
2025-09-29 05:49:12.187456 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:49:12.187466 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:49:12.187475 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:49:12.187484 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:49:12.187494 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:49:12.187503 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:49:12.187512 | orchestrator |
2025-09-29 05:49:12.187539 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-09-29 05:49:12.187549 | orchestrator | Monday 29 September 2025 05:48:56 +0000 (0:00:01.034) 0:00:45.736 ******
2025-09-29 05:49:12.187559 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:49:12.187568 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:49:12.187577 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:49:12.187587 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:49:12.187596 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:49:12.187606 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:49:12.187615 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:49:12.187624 | orchestrator |
2025-09-29 05:49:12.187665 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-09-29 05:49:12.187680 | orchestrator | Monday 29 September 2025 05:48:56 +0000 (0:00:00.358) 0:00:46.095 ******
2025-09-29 05:49:12.187690 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:49:12.187699 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:49:12.187709 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:49:12.187718 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:49:12.187735 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:49:12.187745 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:49:12.187754 | orchestrator | changed: [testbed-manager]
2025-09-29 05:49:12.187764 | orchestrator |
2025-09-29 05:49:12.187773 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-09-29 05:49:12.187783 | orchestrator | Monday 29 September 2025 05:49:07 +0000 (0:00:10.858) 0:00:56.953 ******
2025-09-29 05:49:12.187793 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.187802 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.187811 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.187821 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.187830 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.187840 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.187849 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.187859 | orchestrator |
2025-09-29 05:49:12.187868 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-29 05:49:12.187878 | orchestrator | Monday 29 September 2025 05:49:08 +0000 (0:00:01.023) 0:00:57.977 ******
2025-09-29 05:49:12.187888 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.187897 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.187907 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.187916 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.187925 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.187935 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.187944 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.187954 | orchestrator |
2025-09-29 05:49:12.187963 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-29 05:49:12.187973 | orchestrator | Monday 29 September 2025 05:49:09 +0000 (0:00:00.861) 0:00:58.839 ******
2025-09-29 05:49:12.187982 | orchestrator | ok: [testbed-manager]
2025-09-29 05:49:12.187992 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:49:12.188001 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:49:12.188010 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:49:12.188020 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:49:12.188029 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:49:12.188039 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:49:12.188049 | orchestrator |
2025-09-29 05:49:12.188066 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-29 05:49:12.188083 | orchestrator | Monday 29 September 2025 05:49:09 +0000 (0:00:00.142) 0:00:58.981 ******
2025-09-29 05:49:12.188100 | 
orchestrator | ok: [testbed-manager] 2025-09-29 05:49:12.188115 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:49:12.188131 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:49:12.188147 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:49:12.188163 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:49:12.188180 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:49:12.188197 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:49:12.188213 | orchestrator | 2025-09-29 05:49:12.188223 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-29 05:49:12.188233 | orchestrator | Monday 29 September 2025 05:49:09 +0000 (0:00:00.151) 0:00:59.133 ****** 2025-09-29 05:49:12.188243 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 05:49:12.188253 | orchestrator | 2025-09-29 05:49:12.188262 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-29 05:49:12.188272 | orchestrator | Monday 29 September 2025 05:49:10 +0000 (0:00:00.227) 0:00:59.361 ****** 2025-09-29 05:49:12.188281 | orchestrator | ok: [testbed-manager] 2025-09-29 05:49:12.188291 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:49:12.188300 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:49:12.188310 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:49:12.188319 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:49:12.188328 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:49:12.188337 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:49:12.188354 | orchestrator | 2025-09-29 05:49:12.188364 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-29 05:49:12.188374 | orchestrator | Monday 29 September 2025 05:49:11 +0000 
(0:00:01.380) 0:01:00.741 ****** 2025-09-29 05:49:12.188383 | orchestrator | changed: [testbed-manager] 2025-09-29 05:49:12.188393 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:49:12.188402 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:49:12.188411 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:49:12.188421 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:49:12.188430 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:49:12.188440 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:49:12.188449 | orchestrator | 2025-09-29 05:49:12.188459 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-09-29 05:49:12.188468 | orchestrator | Monday 29 September 2025 05:49:11 +0000 (0:00:00.537) 0:01:01.279 ****** 2025-09-29 05:49:12.188478 | orchestrator | ok: [testbed-manager] 2025-09-29 05:49:12.188488 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:49:12.188497 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:49:12.188506 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:49:12.188516 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:49:12.188525 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:49:12.188535 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:49:12.188544 | orchestrator | 2025-09-29 05:49:12.188561 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-09-29 05:51:21.316944 | orchestrator | Monday 29 September 2025 05:49:12 +0000 (0:00:00.241) 0:01:01.521 ****** 2025-09-29 05:51:21.317064 | orchestrator | ok: [testbed-manager] 2025-09-29 05:51:21.317082 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:51:21.317094 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:51:21.317105 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:51:21.317116 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:51:21.317127 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:51:21.317138 | orchestrator | ok: 
[testbed-node-0] 2025-09-29 05:51:21.317149 | orchestrator | 2025-09-29 05:51:21.317162 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-09-29 05:51:21.317173 | orchestrator | Monday 29 September 2025 05:49:13 +0000 (0:00:01.081) 0:01:02.603 ****** 2025-09-29 05:51:21.317185 | orchestrator | changed: [testbed-manager] 2025-09-29 05:51:21.317196 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:51:21.317223 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:51:21.317234 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:51:21.317245 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:51:21.317255 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:51:21.317266 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:51:21.317277 | orchestrator | 2025-09-29 05:51:21.317289 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-09-29 05:51:21.317300 | orchestrator | Monday 29 September 2025 05:49:14 +0000 (0:00:01.423) 0:01:04.026 ****** 2025-09-29 05:51:21.317311 | orchestrator | ok: [testbed-manager] 2025-09-29 05:51:21.317322 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:51:21.317333 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:51:21.317344 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:51:21.317355 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:51:21.317365 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:51:21.317376 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:51:21.317387 | orchestrator | 2025-09-29 05:51:21.317398 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-09-29 05:51:21.317409 | orchestrator | Monday 29 September 2025 05:49:16 +0000 (0:00:02.059) 0:01:06.085 ****** 2025-09-29 05:51:21.317422 | orchestrator | ok: [testbed-manager] 2025-09-29 05:51:21.317434 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:51:21.317447 | orchestrator | 
ok: [testbed-node-0] 2025-09-29 05:51:21.317460 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:51:21.317472 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:51:21.317484 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:51:21.317520 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:51:21.317532 | orchestrator | 2025-09-29 05:51:21.317545 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-09-29 05:51:21.317557 | orchestrator | Monday 29 September 2025 05:49:53 +0000 (0:00:36.668) 0:01:42.753 ****** 2025-09-29 05:51:21.317570 | orchestrator | changed: [testbed-manager] 2025-09-29 05:51:21.317582 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:51:21.317594 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:51:21.317606 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:51:21.317618 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:51:21.317631 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:51:21.317643 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:51:21.317655 | orchestrator | 2025-09-29 05:51:21.317667 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-09-29 05:51:21.317679 | orchestrator | Monday 29 September 2025 05:51:07 +0000 (0:01:14.176) 0:02:56.929 ****** 2025-09-29 05:51:21.317692 | orchestrator | ok: [testbed-manager] 2025-09-29 05:51:21.317704 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:51:21.317717 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:51:21.317727 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:51:21.317738 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:51:21.317749 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:51:21.317759 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:51:21.317770 | orchestrator | 2025-09-29 05:51:21.317781 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-09-29 05:51:21.317816 
| orchestrator | Monday 29 September 2025 05:51:09 +0000 (0:00:01.685) 0:02:58.615 ****** 2025-09-29 05:51:21.317828 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:51:21.317839 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:51:21.317849 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:51:21.317860 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:51:21.317870 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:51:21.317881 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:51:21.317891 | orchestrator | changed: [testbed-manager] 2025-09-29 05:51:21.317902 | orchestrator | 2025-09-29 05:51:21.317913 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-09-29 05:51:21.317924 | orchestrator | Monday 29 September 2025 05:51:20 +0000 (0:00:10.941) 0:03:09.556 ****** 2025-09-29 05:51:21.317943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-09-29 05:51:21.317959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-09-29 05:51:21.317996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-09-29 05:51:21.318071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-29 05:51:21.318095 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-09-29 05:51:21.318107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-09-29 05:51:21.318118 | orchestrator | 2025-09-29 05:51:21.318129 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-09-29 05:51:21.318140 | orchestrator | Monday 29 September 2025 05:51:20 +0000 (0:00:00.333) 0:03:09.890 ****** 2025-09-29 05:51:21.318150 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-29 05:51:21.318161 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:51:21.318172 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-29 05:51:21.318183 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-29 05:51:21.318194 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:51:21.318204 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:51:21.318215 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-09-29 05:51:21.318225 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:51:21.318236 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-29 05:51:21.318247 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-29 05:51:21.318257 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-29 05:51:21.318268 | orchestrator | 2025-09-29 05:51:21.318278 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-09-29 05:51:21.318289 | orchestrator | Monday 29 September 2025 05:51:21 +0000 (0:00:00.627) 0:03:10.517 ****** 2025-09-29 05:51:21.318300 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-29 05:51:21.318312 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-29 05:51:21.318322 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-29 05:51:21.318333 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-29 05:51:21.318344 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-29 05:51:21.318354 | orchestrator | skipping: [testbed-manager] => (item={'name': 
'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-29 05:51:21.318365 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-29 05:51:21.318376 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-29 05:51:21.318387 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-29 05:51:21.318398 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-29 05:51:21.318408 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-29 05:51:21.318419 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-29 05:51:21.318436 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-29 05:51:21.318447 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:51:21.318457 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-29 05:51:21.318475 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-29 05:51:21.318486 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-29 05:51:21.318496 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-29 05:51:21.318515 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-29 05:51:28.582969 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-29 05:51:28.583078 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-29 05:51:28.583094 | 
orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-29 05:51:28.583106 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-29 05:51:28.583135 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-29 05:51:28.583147 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:51:28.583160 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-29 05:51:28.583171 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-09-29 05:51:28.583182 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-29 05:51:28.583192 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-09-29 05:51:28.583208 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-29 05:51:28.583219 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-09-29 05:51:28.583230 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-29 05:51:28.583241 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-09-29 05:51:28.583252 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-29 05:51:28.583262 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-09-29 05:51:28.583273 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-29 05:51:28.583284 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-29 05:51:28.583295 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-09-29 05:51:28.583306 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-09-29 05:51:28.583317 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-09-29 05:51:28.583328 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:51:28.583338 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-09-29 05:51:28.583349 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-09-29 05:51:28.583360 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:51:28.583371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-29 05:51:28.583382 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-29 05:51:28.583417 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-09-29 05:51:28.583428 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-29 05:51:28.583439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-29 05:51:28.583449 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-29 05:51:28.583460 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-29 05:51:28.583471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-29 05:51:28.583482 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.core.wmem_max', 'value': 16777216}) 2025-09-29 05:51:28.583492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-09-29 05:51:28.583505 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-29 05:51:28.583517 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-29 05:51:28.583530 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-09-29 05:51:28.583542 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-29 05:51:28.583555 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-29 05:51:28.583567 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-29 05:51:28.583580 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-29 05:51:28.583593 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-29 05:51:28.583623 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-09-29 05:51:28.583636 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-29 05:51:28.583650 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-29 05:51:28.583662 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-09-29 05:51:28.583675 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-29 05:51:28.583693 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-09-29 
05:51:28.583706 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-29 05:51:28.583718 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-09-29 05:51:28.583731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-29 05:51:28.583743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-09-29 05:51:28.583756 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-09-29 05:51:28.583769 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-09-29 05:51:28.583781 | orchestrator | 2025-09-29 05:51:28.583816 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-09-29 05:51:28.583829 | orchestrator | Monday 29 September 2025 05:51:26 +0000 (0:00:05.595) 0:03:16.112 ****** 2025-09-29 05:51:28.583842 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-29 05:51:28.583855 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-29 05:51:28.583865 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-29 05:51:28.583884 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-29 05:51:28.583895 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-29 05:51:28.583906 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-29 05:51:28.583916 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-09-29 05:51:28.583927 | orchestrator | 2025-09-29 05:51:28.583938 | orchestrator | TASK [osism.commons.sysctl : 
Set sysctl parameters on compute] ***************** 2025-09-29 05:51:28.583949 | orchestrator | Monday 29 September 2025 05:51:27 +0000 (0:00:00.690) 0:03:16.803 ****** 2025-09-29 05:51:28.583960 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.583971 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:51:28.583982 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.583992 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:51:28.584003 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.584014 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:51:28.584025 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.584036 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:51:28.584047 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-29 05:51:28.584058 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-29 05:51:28.584069 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-29 05:51:28.584079 | orchestrator | 2025-09-29 05:51:28.584090 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2025-09-29 05:51:28.584101 | orchestrator | Monday 29 September 2025 05:51:27 +0000 (0:00:00.512) 0:03:17.316 ****** 2025-09-29 05:51:28.584112 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.584122 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.584133 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:51:28.584144 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:51:28.584155 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.584166 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-09-29 05:51:28.584177 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:51:28.584187 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:51:28.584198 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-29 05:51:28.584209 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-29 05:51:28.584220 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-09-29 05:51:28.584231 | orchestrator | 2025-09-29 05:51:28.584248 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-09-29 05:51:41.877995 | orchestrator | Monday 29 September 2025 05:51:28 +0000 (0:00:00.604) 0:03:17.920 ****** 2025-09-29 05:51:41.878173 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-29 05:51:41.878193 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:51:41.878206 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-29 05:51:41.878218 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-09-29 05:51:41.878271 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:51:41.878284 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:51:41.878295 | 
orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-29 05:51:41.878306 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:51:41.878317 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-29 05:51:41.878328 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-29 05:51:41.878339 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-29 05:51:41.878350 | orchestrator |
2025-09-29 05:51:41.878363 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-29 05:51:41.878374 | orchestrator | Monday 29 September 2025 05:51:30 +0000 (0:00:01.526) 0:03:19.447 ******
2025-09-29 05:51:41.878385 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:51:41.878396 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:51:41.878407 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:51:41.878418 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:51:41.878428 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:51:41.878439 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:51:41.878450 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:51:41.878461 | orchestrator |
2025-09-29 05:51:41.878472 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-29 05:51:41.878483 | orchestrator | Monday 29 September 2025 05:51:30 +0000 (0:00:00.247) 0:03:19.695 ******
2025-09-29 05:51:41.878493 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:41.878506 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:51:41.878516 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:51:41.878527 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:51:41.878538 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:51:41.878548 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:51:41.878559 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:51:41.878570 | orchestrator |
2025-09-29 05:51:41.878581 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-29 05:51:41.878592 | orchestrator | Monday 29 September 2025 05:51:36 +0000 (0:00:05.821) 0:03:25.516 ******
2025-09-29 05:51:41.878603 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-29 05:51:41.878614 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:51:41.878625 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-29 05:51:41.878635 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:51:41.878646 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-29 05:51:41.878657 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:51:41.878667 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-29 05:51:41.878678 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:51:41.878689 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-29 05:51:41.878699 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-29 05:51:41.878710 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:51:41.878721 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:51:41.878731 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-29 05:51:41.878742 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:51:41.878753 | orchestrator |
2025-09-29 05:51:41.878763 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-29 05:51:41.878774 | orchestrator | Monday 29 September 2025 05:51:36 +0000 (0:00:00.270) 0:03:25.787 ******
2025-09-29 05:51:41.878785 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-29 05:51:41.878796 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-29 05:51:41.878806 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-29 05:51:41.878856 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-29 05:51:41.878876 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-29 05:51:41.878887 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-29 05:51:41.878898 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-29 05:51:41.878908 | orchestrator |
2025-09-29 05:51:41.878919 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-29 05:51:41.878930 | orchestrator | Monday 29 September 2025 05:51:37 +0000 (0:00:00.998) 0:03:26.785 ******
2025-09-29 05:51:41.878943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:51:41.878957 | orchestrator |
2025-09-29 05:51:41.878968 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-29 05:51:41.878980 | orchestrator | Monday 29 September 2025 05:51:37 +0000 (0:00:00.396) 0:03:27.182 ******
2025-09-29 05:51:41.878990 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:41.879001 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:51:41.879012 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:51:41.879023 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:51:41.879034 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:51:41.879045 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:51:41.879055 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:51:41.879066 | orchestrator |
2025-09-29 05:51:41.879077 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-29 05:51:41.879088 | orchestrator | Monday 29 September 2025 05:51:39 +0000 (0:00:01.190) 0:03:28.373 ******
2025-09-29 05:51:41.879099 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:41.879129 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:51:41.879140 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:51:41.879151 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:51:41.879162 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:51:41.879173 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:51:41.879184 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:51:41.879194 | orchestrator |
2025-09-29 05:51:41.879206 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-29 05:51:41.879217 | orchestrator | Monday 29 September 2025 05:51:39 +0000 (0:00:00.614) 0:03:28.988 ******
2025-09-29 05:51:41.879228 | orchestrator | changed: [testbed-manager]
2025-09-29 05:51:41.879239 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:51:41.879249 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:51:41.879261 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:51:41.879272 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:51:41.879283 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:51:41.879293 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:51:41.879304 | orchestrator |
2025-09-29 05:51:41.879315 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-29 05:51:41.879326 | orchestrator | Monday 29 September 2025 05:51:40 +0000 (0:00:00.658) 0:03:29.646 ******
2025-09-29 05:51:41.879337 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:41.879348 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:51:41.879359 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:51:41.879370 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:51:41.879381 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:51:41.879392 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:51:41.879403 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:51:41.879413 | orchestrator |
2025-09-29 05:51:41.879424 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-29 05:51:41.879435 | orchestrator | Monday 29 September 2025 05:51:40 +0000 (0:00:00.585) 0:03:30.232 ******
2025-09-29 05:51:41.879451 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759123711.8881645, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:41.879479 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759123736.0167599, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:41.879492 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759123735.2898102, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:41.879504 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759123747.9730995, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:41.879515 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759123747.4154887, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:41.879535 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759123749.083949, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653319 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1759123748.3338945, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653440 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653488 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653502 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653514 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653526 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653537 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653583 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 05:51:56.653597 | orchestrator |
2025-09-29 05:51:56.653610 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-29 05:51:56.653623 | orchestrator | Monday 29 September 2025 05:51:41 +0000 (0:00:00.974) 0:03:31.206 ******
2025-09-29 05:51:56.653634 | orchestrator | changed: [testbed-manager]
2025-09-29 05:51:56.653655 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:51:56.653666 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:51:56.653677 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:51:56.653687 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:51:56.653698 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:51:56.653709 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:51:56.653720 | orchestrator |
2025-09-29 05:51:56.653731 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-29 05:51:56.653742 | orchestrator | Monday 29 September 2025 05:51:42 +0000 (0:00:01.110) 0:03:32.316 ******
2025-09-29 05:51:56.653753 | orchestrator | changed: [testbed-manager]
2025-09-29 05:51:56.653763 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:51:56.653774 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:51:56.653806 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:51:56.653846 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:51:56.653859 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:51:56.653871 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:51:56.653884 | orchestrator |
2025-09-29 05:51:56.653896 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-29 05:51:56.653908 | orchestrator | Monday 29 September 2025 05:51:44 +0000 (0:00:01.225) 0:03:33.542 ******
2025-09-29 05:51:56.653920 | orchestrator | changed: [testbed-manager]
2025-09-29 05:51:56.653932 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:51:56.653944 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:51:56.653956 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:51:56.653968 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:51:56.653980 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:51:56.653992 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:51:56.654003 | orchestrator |
2025-09-29 05:51:56.654014 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-29 05:51:56.654084 | orchestrator | Monday 29 September 2025 05:51:45 +0000 (0:00:01.199) 0:03:34.742 ******
2025-09-29 05:51:56.654095 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:51:56.654106 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:51:56.654116 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:51:56.654160 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:51:56.654171 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:51:56.654182 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:51:56.654193 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:51:56.654203 | orchestrator |
2025-09-29 05:51:56.654215 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-29 05:51:56.654226 | orchestrator | Monday 29 September 2025 05:51:45 +0000 (0:00:00.273) 0:03:35.016 ******
2025-09-29 05:51:56.654237 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:56.654248 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:51:56.654259 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:51:56.654270 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:51:56.654281 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:51:56.654291 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:51:56.654302 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:51:56.654312 | orchestrator |
2025-09-29 05:51:56.654323 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-29 05:51:56.654334 | orchestrator | Monday 29 September 2025 05:51:46 +0000 (0:00:00.769) 0:03:35.785 ******
2025-09-29 05:51:56.654347 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:51:56.654360 | orchestrator |
2025-09-29 05:51:56.654371 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-29 05:51:56.654382 | orchestrator | Monday 29 September 2025 05:51:46 +0000 (0:00:00.435) 0:03:36.221 ******
2025-09-29 05:51:56.654393 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:56.654404 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:51:56.654423 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:51:56.654434 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:51:56.654444 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:51:56.654455 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:51:56.654466 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:51:56.654477 | orchestrator |
2025-09-29 05:51:56.654488 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-29 05:51:56.654499 | orchestrator | Monday 29 September 2025 05:51:54 +0000 (0:00:07.485) 0:03:43.706 ******
2025-09-29 05:51:56.654509 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:56.654520 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:51:56.654531 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:51:56.654542 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:51:56.654552 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:51:56.654563 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:51:56.654573 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:51:56.654584 | orchestrator |
2025-09-29 05:51:56.654595 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-29 05:51:56.654606 | orchestrator | Monday 29 September 2025 05:51:55 +0000 (0:00:01.259) 0:03:44.966 ******
2025-09-29 05:51:56.654616 | orchestrator | ok: [testbed-manager]
2025-09-29 05:51:56.654627 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:51:56.654638 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:51:56.654648 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:51:56.654659 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:51:56.654669 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:51:56.654680 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:51:56.654690 | orchestrator |
2025-09-29 05:51:56.654716 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-29 05:53:02.910810 | orchestrator | Monday 29 September 2025 05:51:56 +0000 (0:00:01.015) 0:03:45.981 ******
2025-09-29 05:53:02.910935 | orchestrator | ok: [testbed-manager]
2025-09-29 05:53:02.910945 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:53:02.910951 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:53:02.910957 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:53:02.910963 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:53:02.910968 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:53:02.910974 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:53:02.910980 | orchestrator |
2025-09-29 05:53:02.910987 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-29 05:53:02.910993 | orchestrator | Monday 29 September 2025 05:51:56 +0000 (0:00:00.309) 0:03:46.291 ******
2025-09-29 05:53:02.910999 | orchestrator | ok: [testbed-manager]
2025-09-29 05:53:02.911004 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:53:02.911009 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:53:02.911015 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:53:02.911020 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:53:02.911025 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:53:02.911031 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:53:02.911036 | orchestrator |
2025-09-29 05:53:02.911041 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-29 05:53:02.911047 | orchestrator | Monday 29 September 2025 05:51:57 +0000 (0:00:00.443) 0:03:46.734 ******
2025-09-29 05:53:02.911053 | orchestrator | ok: [testbed-manager]
2025-09-29 05:53:02.911058 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:53:02.911063 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:53:02.911069 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:53:02.911075 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:53:02.911080 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:53:02.911085 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:53:02.911091 | orchestrator |
2025-09-29 05:53:02.911096 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-29 05:53:02.911102 | orchestrator | Monday 29 September 2025 05:51:57 +0000 (0:00:00.307) 0:03:47.042 ******
2025-09-29 05:53:02.911107 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:53:02.911131 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:53:02.911136 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:53:02.911141 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:53:02.911147 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:53:02.911152 | orchestrator | ok: [testbed-manager]
2025-09-29 05:53:02.911157 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:53:02.911163 | orchestrator |
2025-09-29 05:53:02.911168 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-29 05:53:02.911174 | orchestrator | Monday 29 September 2025 05:52:03 +0000 (0:00:05.782) 0:03:52.824 ******
2025-09-29 05:53:02.911181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:53:02.911188 | orchestrator |
2025-09-29 05:53:02.911193 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-29 05:53:02.911199 | orchestrator | Monday 29 September 2025 05:52:03 +0000 (0:00:00.369) 0:03:53.194 ******
2025-09-29 05:53:02.911204 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-29 05:53:02.911210 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-29 05:53:02.911215 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-29 05:53:02.911221 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-29 05:53:02.911226 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:53:02.911232 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:53:02.911237 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-29 05:53:02.911242 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-29 05:53:02.911248 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-29 05:53:02.911253 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:53:02.911258 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-29 05:53:02.911264 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-29 05:53:02.911269 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:53:02.911274 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-29 05:53:02.911280 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-29 05:53:02.911285 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-29 05:53:02.911291 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:53:02.911296 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:53:02.911301 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-29 05:53:02.911306 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-29 05:53:02.911312 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:53:02.911317 | orchestrator |
2025-09-29 05:53:02.911322 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-29 05:53:02.911328 | orchestrator | Monday 29 September 2025 05:52:04 +0000 (0:00:00.306) 0:03:53.500 ******
2025-09-29 05:53:02.911333 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:53:02.911339 | orchestrator |
2025-09-29 05:53:02.911344 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-29 05:53:02.911350 | orchestrator | Monday 29 September 2025 05:52:04 +0000 (0:00:00.357) 0:03:53.858 ******
2025-09-29 05:53:02.911355 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-29 05:53:02.911360 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:53:02.911366 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-29 05:53:02.911372 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:53:02.911379 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-29 05:53:02.911403 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-29 05:53:02.911410 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:53:02.911417 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-29 05:53:02.911423 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:53:02.911429 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:53:02.911436 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-29 05:53:02.911443 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:53:02.911449 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-29 05:53:02.911455 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:53:02.911461 | orchestrator |
2025-09-29 05:53:02.911468 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-29 05:53:02.911474 | orchestrator | Monday 29 September 2025 05:52:04 +0000 (0:00:00.260) 0:03:54.119 ******
2025-09-29 05:53:02.911481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:53:02.911488 | orchestrator |
2025-09-29 05:53:02.911494 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-29 05:53:02.911501 | orchestrator | Monday 29 September 2025 05:52:05 +0000 (0:00:00.334) 0:03:54.453 ******
2025-09-29 05:53:02.911507 | orchestrator | changed: [testbed-manager]
2025-09-29 05:53:02.911513 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:53:02.911519 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:53:02.911526 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:53:02.911532 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:53:02.911538 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:53:02.911544 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:53:02.911551 | orchestrator |
2025-09-29 05:53:02.911557 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-29 05:53:02.911563 | orchestrator | Monday 29 September 2025 05:52:38 +0000 (0:00:32.982) 0:04:27.436 ******
2025-09-29 05:53:02.911569 | orchestrator | changed: [testbed-manager]
2025-09-29 05:53:02.911575 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:53:02.911582 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:53:02.911588 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:53:02.911595 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:53:02.911601 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:53:02.911607 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:53:02.911614 | orchestrator |
2025-09-29 05:53:02.911620 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-29 05:53:02.911627 | orchestrator | Monday 29 September 2025 05:52:45 +0000 (0:00:07.858) 0:04:35.294 ******
2025-09-29 05:53:02.911634 | orchestrator | changed: [testbed-manager]
2025-09-29 05:53:02.911640 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:53:02.911646 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:53:02.911665 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:53:02.911671 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:53:02.911678 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:53:02.911684 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:53:02.911690 | orchestrator |
2025-09-29 05:53:02.911697 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-29 05:53:02.911703 | orchestrator | Monday 29 September 2025 05:52:53 +0000 (0:00:07.539) 0:04:42.834 ******
2025-09-29 05:53:02.911710 | orchestrator | ok: [testbed-manager]
2025-09-29 05:53:02.911716 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:53:02.911723 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:53:02.911729 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:53:02.911734 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:53:02.911740 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:53:02.911745 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:53:02.911754 | orchestrator |
2025-09-29 05:53:02.911760 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-29 05:53:02.911765 | orchestrator | Monday 29 September 2025 05:52:55 +0000 (0:00:01.580) 0:04:44.414 ******
2025-09-29 05:53:02.911771 | orchestrator | changed: [testbed-manager]
2025-09-29 05:53:02.911776 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:53:02.911782 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:53:02.911787 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:53:02.911792 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:53:02.911798 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:53:02.911803 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:53:02.911808 | orchestrator |
2025-09-29 05:53:02.911814 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-29 05:53:02.911819 | orchestrator | Monday 29 September 2025 05:53:00 +0000 (0:00:05.271) 0:04:49.686 ******
2025-09-29 05:53:02.911825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:53:02.911832 | orchestrator |
2025-09-29 05:53:02.911837 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-29 05:53:02.911843 | orchestrator | Monday 29 September 2025 05:53:00 +0000 (0:00:00.423) 0:04:50.109 ******
2025-09-29 05:53:02.911848 | orchestrator | changed: [testbed-manager]
2025-09-29 05:53:02.911854 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:53:02.911859 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:53:02.911864 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:53:02.911869 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:53:02.911888 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:53:02.911894 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:53:02.911899 | orchestrator |
2025-09-29 05:53:02.911904 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-09-29 05:53:02.911910 | orchestrator | Monday 29 September 2025 05:53:01 +0000 (0:00:00.637) 0:04:50.747 ******
2025-09-29 05:53:02.911915 | orchestrator | ok: [testbed-manager]
2025-09-29 05:53:02.911921 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:53:02.911926 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:53:02.911932 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:53:02.911944 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:53:16.640858 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:53:16.641009 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:53:16.641023 | orchestrator |
2025-09-29 05:53:16.641035 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-09-29 05:53:16.641045 | orchestrator | Monday 29 September 2025 05:53:02 +0000 (0:00:01.493) 0:04:52.240 ******
2025-09-29 05:53:16.641053 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:53:16.641062 | orchestrator | changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
Monday 29 September 2025 05:53:03 +0000 (0:00:00.693) 0:04:52.934 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
Monday 29 September 2025 05:53:03 +0000 (0:00:00.218) 0:04:53.153 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Gather variables for each operating system] ******
Monday 29 September 2025 05:53:04 +0000 (0:00:00.350) 0:04:53.503 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Monday 29 September 2025 05:53:04 +0000 (0:00:00.257) 0:04:53.760 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
Monday 29 September 2025 05:53:04 +0000 (0:00:00.237) 0:04:53.998 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Print used docker version] ***********************
Monday 29 September 2025 05:53:04 +0000 (0:00:00.249) 0:04:54.247 ******
ok: [testbed-manager] =>
  docker_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_version: 5:27.5.1

TASK [osism.services.docker : Print used docker cli version] *******************
Monday 29 September 2025 05:53:05 +0000 (0:00:00.242) 0:04:54.490 ******
ok: [testbed-manager] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-3] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_cli_version: 5:27.5.1

TASK [osism.services.docker : Include block storage tasks] *********************
Monday 29 September 2025 05:53:05 +0000 (0:00:00.309) 0:04:54.799 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include zram storage tasks] **********************
Monday 29 September 2025 05:53:05 +0000 (0:00:00.341) 0:04:55.140 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include docker install tasks] ********************
Monday 29 September 2025 05:53:06 +0000 (0:00:00.278) 0:04:55.419 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Remove old architecture-dependent repository] ****
Monday 29 September 2025 05:53:06 +0000 (0:00:00.458) 0:04:55.878 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-2]

TASK [osism.services.docker : Gather package facts] ****************************
Monday 29 September 2025 05:53:07 +0000 (0:00:00.791) 0:04:56.670 ******
ok: [testbed-node-2]
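The included Debian-family install tasks that follow add Docker's upstream apt repository before installing the pinned packages. A hand-run sketch of roughly what that repository entry looks like (approximation only, not the role's actual template; a temp directory replaces `/etc/apt` so nothing on the system changes):

```shell
# Sketch: an apt source entry of the kind the repository tasks manage.
# Assumptions: keyring path and file name are illustrative; "noble" is the
# Ubuntu 24.04 codename used by this job. Written to a temp dir, no side effects.
aptdir=$(mktemp -d)
codename=noble
echo "deb [signed-by=$aptdir/docker.asc] https://download.docker.com/linux/ubuntu $codename stable" \
  > "$aptdir/docker.list"
cat "$aptdir/docker.list"
```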
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]

TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
Monday 29 September 2025 05:53:10 +0000 (0:00:03.043) 0:04:59.713 ******
skipping: [testbed-manager] => (item=containerd)
skipping: [testbed-manager] => (item=docker.io)
skipping: [testbed-manager] => (item=docker-engine)
skipping: [testbed-node-3] => (item=containerd)
skipping: [testbed-node-3] => (item=docker.io)
skipping: [testbed-node-3] => (item=docker-engine)
skipping: [testbed-manager]
skipping: [testbed-node-4] => (item=containerd)
skipping: [testbed-node-4] => (item=docker.io)
skipping: [testbed-node-4] => (item=docker-engine)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=containerd)
skipping: [testbed-node-5] => (item=docker.io)
skipping: [testbed-node-5] => (item=docker-engine)
skipping: [testbed-node-4]
skipping: [testbed-node-0] => (item=containerd)
skipping: [testbed-node-0] => (item=docker.io)
skipping: [testbed-node-0] => (item=docker-engine)
skipping: [testbed-node-5]
skipping: [testbed-node-1] => (item=containerd)
skipping: [testbed-node-1] => (item=docker.io)
skipping: [testbed-node-1] => (item=docker-engine)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=containerd)
skipping: [testbed-node-2] => (item=docker.io)
skipping: [testbed-node-2] => (item=docker-engine)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install apt-transport-https package] *************
Monday 29 September 2025 05:53:10 +0000 (0:00:00.541) 0:05:00.255 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.docker : Add repository gpg key] **************************
Monday 29 September 2025 05:53:16 +0000 (0:00:05.714) 0:05:05.969 ******
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Add repository] **********************************
Monday 29 September 2025 05:53:17 +0000 (0:00:01.070) 0:05:07.040 ******
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.services.docker : Update package cache] ****************************
Monday 29 September 2025 05:53:24 +0000 (0:00:07.026) 0:05:14.067 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker package version] **********************
Monday 29 September 2025 05:53:28 +0000 (0:00:03.386) 0:05:17.454 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker-cli package version] ******************
Monday 29 September 2025 05:53:29 +0000 (0:00:01.357) 0:05:18.812 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Unlock containerd package] ***********************
Monday 29 September 2025 05:53:30 +0000 (0:00:01.434) 0:05:20.247 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.services.docker : Install containerd package] **********************
Monday 29 September 2025 05:53:31 +0000 (0:00:00.616) 0:05:20.864 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]

TASK [osism.services.docker : Lock containerd package] *************************
Monday 29 September 2025 05:53:40 +0000 (0:00:09.404) 0:05:30.268 ******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker-cli package] **********************
Monday 29 September 2025 05:53:41 +0000 (0:00:00.809) 0:05:31.078 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker package] **************************
Monday 29 September 2025 05:53:50 +0000 (0:00:08.378) 0:05:39.457 ******
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-5]

TASK [osism.services.docker : Unblock installation of python docker packages] ***
Monday 29 September 2025 05:54:00 +0000 (0:00:10.189) 0:05:49.646 ******
ok: [testbed-manager] => (item=python3-docker)
ok: [testbed-node-3] => (item=python3-docker)
ok: [testbed-node-4] => (item=python3-docker)
ok: [testbed-node-5] => (item=python3-docker)
ok: [testbed-node-1] => (item=python3-docker)
ok: [testbed-manager] => (item=python-docker)
ok: [testbed-node-0] => (item=python3-docker)
ok: [testbed-node-2] => (item=python3-docker)
ok: [testbed-node-3] => (item=python-docker)
ok: [testbed-node-4] => (item=python-docker)
ok: [testbed-node-5] => (item=python-docker)
ok: [testbed-node-1] => (item=python-docker)
ok: [testbed-node-0] => (item=python-docker)
ok: [testbed-node-2] => (item=python-docker)

TASK [osism.services.docker : Install python3 docker package] ******************
Monday 29 September 2025 05:54:01 +0000 (0:00:01.088) 0:05:50.734 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
Monday 29 September 2025 05:54:01 +0000 (0:00:00.448) 0:05:51.182 ******
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-2]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Monday 29 September 2025 05:54:05 +0000 (0:00:03.415) 0:05:54.598 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Monday 29 September 2025 05:54:05 +0000 (0:00:00.511) 0:05:55.109 ******
skipping: [testbed-manager] => (item=python3-docker)
skipping: [testbed-manager] => (item=python-docker)
skipping: [testbed-manager]
skipping: [testbed-node-3] => (item=python3-docker)
skipping: [testbed-node-3] => (item=python-docker)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=python3-docker)
skipping: [testbed-node-4] => (item=python-docker)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=python3-docker)
skipping: [testbed-node-5] => (item=python-docker)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=python3-docker)
skipping: [testbed-node-0] => (item=python-docker)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=python3-docker)
skipping: [testbed-node-1] => (item=python-docker)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=python3-docker)
skipping: [testbed-node-2] => (item=python-docker)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
Monday 29 September 2025 05:54:06 +0000 (0:00:00.775) 0:05:55.885 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
Monday 29 September 2025 05:54:07 +0000 (0:00:00.499) 0:05:56.384 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install packages required by docker login] *******
Monday 29 September 2025 05:54:07 +0000 (0:00:00.478) 0:05:56.862 ******
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Ensure that some packages are not installed] *****
Monday 29 September 2025 05:54:07 +0000 (0:00:00.435) 0:05:57.298 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Include config tasks] ****************************
Monday 29 September 2025 05:54:09 +0000 (0:00:01.574) 0:05:58.872 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Create plugins directory] ************************
Monday 29 September 2025 05:54:10 +0000 (0:00:00.861) 0:05:59.733 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Create systemd overlay directory] ****************
Monday 29 September 2025 05:54:11 +0000 (0:00:00.725) 0:06:00.459 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy systemd overlay file] ***********************
Monday 29 September 2025 05:54:11 +0000 (0:00:00.753) 0:06:01.213 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
Monday 29 September 2025 05:54:13 +0000 (0:00:01.395) 0:06:02.608 ******
skipping: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.services.docker : Copy limits configuration file] ******************
Monday 29 September 2025 05:54:14 +0000 (0:00:01.312) 0:06:03.921 ******
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy daemon.json configuration file] *************
Monday 29 September 2025 05:54:15 +0000 (0:00:01.201) 0:06:05.123 ******
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Include service tasks] ***************************
Monday 29 September 2025 05:54:17 +0000 (0:00:01.252) 0:06:06.376 ******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Reload systemd daemon] ***************************
Monday 29 September 2025 05:54:17 +0000 (0:00:00.838) 0:06:07.214 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Manage service] **********************************
Monday 29 September 2025 05:54:19 +0000 (0:00:01.192) 0:06:08.407 ******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0] 2025-09-29 05:54:26.496809 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:26.496822 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:54:26.496840 | orchestrator | 2025-09-29 05:54:26.496870 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-09-29 05:54:26.496924 | orchestrator | Monday 29 September 2025 05:54:20 +0000 (0:00:01.077) 0:06:09.485 ****** 2025-09-29 05:54:26.496945 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:26.496961 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:54:26.496971 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:54:26.496982 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:54:26.496992 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:54:26.497003 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:26.497013 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:54:26.497023 | orchestrator | 2025-09-29 05:54:26.497034 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-09-29 05:54:26.497045 | orchestrator | Monday 29 September 2025 05:54:21 +0000 (0:00:01.149) 0:06:10.634 ****** 2025-09-29 05:54:26.497055 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:54:26.497066 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:54:26.497076 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:54:26.497087 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:54:26.497097 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:26.497108 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:54:26.497118 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:26.497129 | orchestrator | 2025-09-29 05:54:26.497140 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-09-29 05:54:26.497151 | orchestrator | Monday 29 September 2025 05:54:22 +0000 (0:00:01.660) 0:06:12.295 ****** 2025-09-29 05:54:26.497162 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 05:54:26.497173 | orchestrator | 2025-09-29 05:54:26.497184 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-29 05:54:26.497194 | orchestrator | Monday 29 September 2025 05:54:23 +0000 (0:00:00.951) 0:06:13.246 ****** 2025-09-29 05:54:26.497205 | orchestrator | 2025-09-29 05:54:26.497215 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-29 05:54:26.497226 | orchestrator | Monday 29 September 2025 05:54:23 +0000 (0:00:00.035) 0:06:13.282 ****** 2025-09-29 05:54:26.497236 | orchestrator | 2025-09-29 05:54:26.497247 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-29 05:54:26.497257 | orchestrator | Monday 29 September 2025 05:54:23 +0000 (0:00:00.039) 0:06:13.322 ****** 2025-09-29 05:54:26.497268 | orchestrator | 2025-09-29 05:54:26.497279 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-29 05:54:26.497290 | orchestrator | Monday 29 September 2025 05:54:24 +0000 (0:00:00.034) 0:06:13.356 ****** 2025-09-29 05:54:26.497335 | orchestrator | 2025-09-29 05:54:26.497359 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-29 05:54:26.497378 | orchestrator | Monday 29 September 2025 05:54:24 +0000 (0:00:00.035) 0:06:13.392 ****** 2025-09-29 05:54:26.497397 | orchestrator | 2025-09-29 05:54:26.497415 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-29 05:54:26.497434 | orchestrator | Monday 29 September 2025 05:54:24 +0000 (0:00:00.039) 0:06:13.432 ****** 2025-09-29 05:54:26.497452 | orchestrator | 2025-09-29 
05:54:26.497471 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-09-29 05:54:26.497482 | orchestrator | Monday 29 September 2025 05:54:24 +0000 (0:00:00.042) 0:06:13.474 ****** 2025-09-29 05:54:26.497493 | orchestrator | 2025-09-29 05:54:26.497504 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-29 05:54:26.497514 | orchestrator | Monday 29 September 2025 05:54:24 +0000 (0:00:00.039) 0:06:13.514 ****** 2025-09-29 05:54:26.497525 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:26.497536 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:54:26.497546 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:54:26.497556 | orchestrator | 2025-09-29 05:54:26.497567 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-09-29 05:54:26.497577 | orchestrator | Monday 29 September 2025 05:54:25 +0000 (0:00:01.099) 0:06:14.613 ****** 2025-09-29 05:54:26.497588 | orchestrator | changed: [testbed-manager] 2025-09-29 05:54:26.497599 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:54:26.497618 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:54:26.497630 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:54:26.497640 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:54:26.497661 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:54:53.174763 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:54:53.174951 | orchestrator | 2025-09-29 05:54:53.174982 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-09-29 05:54:53.175003 | orchestrator | Monday 29 September 2025 05:54:26 +0000 (0:00:01.211) 0:06:15.825 ****** 2025-09-29 05:54:53.175022 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:54:53.175039 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:54:53.175056 | orchestrator | changed: [testbed-node-4] 2025-09-29 
05:54:53.175072 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:54:53.175090 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:54:53.175107 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:54:53.175124 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:54:53.175141 | orchestrator | 2025-09-29 05:54:53.175159 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-29 05:54:53.175176 | orchestrator | Monday 29 September 2025 05:54:28 +0000 (0:00:02.410) 0:06:18.236 ****** 2025-09-29 05:54:53.175194 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:54:53.175209 | orchestrator | 2025-09-29 05:54:53.175227 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-29 05:54:53.175245 | orchestrator | Monday 29 September 2025 05:54:28 +0000 (0:00:00.099) 0:06:18.336 ****** 2025-09-29 05:54:53.175261 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.175281 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:54:53.175297 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:54:53.175314 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:54:53.175336 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:54:53.175354 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:54:53.175372 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:54:53.175390 | orchestrator | 2025-09-29 05:54:53.175406 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-29 05:54:53.175426 | orchestrator | Monday 29 September 2025 05:54:29 +0000 (0:00:00.898) 0:06:19.234 ****** 2025-09-29 05:54:53.175446 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:54:53.175464 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:54:53.175515 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:54:53.175534 | orchestrator | skipping: [testbed-node-5] 2025-09-29 
05:54:53.175553 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:54:53.175569 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:54:53.175584 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:54:53.175600 | orchestrator | 2025-09-29 05:54:53.175617 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-29 05:54:53.175634 | orchestrator | Monday 29 September 2025 05:54:30 +0000 (0:00:00.462) 0:06:19.696 ****** 2025-09-29 05:54:53.175652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 05:54:53.175671 | orchestrator | 2025-09-29 05:54:53.175688 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-29 05:54:53.175705 | orchestrator | Monday 29 September 2025 05:54:31 +0000 (0:00:00.940) 0:06:20.637 ****** 2025-09-29 05:54:53.175722 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.175740 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:54:53.175757 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:54:53.175775 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:54:53.175793 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:54:53.175811 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:53.175828 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:54:53.175846 | orchestrator | 2025-09-29 05:54:53.175863 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-29 05:54:53.175880 | orchestrator | Monday 29 September 2025 05:54:32 +0000 (0:00:00.773) 0:06:21.410 ****** 2025-09-29 05:54:53.175926 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-29 05:54:53.175944 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-29 05:54:53.175961 
| orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-29 05:54:53.175979 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-29 05:54:53.175997 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-29 05:54:53.176014 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-29 05:54:53.176031 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-29 05:54:53.176049 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-29 05:54:53.176067 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-29 05:54:53.176083 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-29 05:54:53.176100 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-29 05:54:53.176118 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-29 05:54:53.176134 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-29 05:54:53.176151 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-29 05:54:53.176168 | orchestrator | 2025-09-29 05:54:53.176185 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-29 05:54:53.176202 | orchestrator | Monday 29 September 2025 05:54:34 +0000 (0:00:02.270) 0:06:23.681 ****** 2025-09-29 05:54:53.176219 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:54:53.176236 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:54:53.176253 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:54:53.176271 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:54:53.176288 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:54:53.176305 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:54:53.176322 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:54:53.176339 | orchestrator | 2025-09-29 05:54:53.176357 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-29 05:54:53.176375 | orchestrator | Monday 29 September 2025 05:54:34 +0000 (0:00:00.522) 0:06:24.203 ****** 2025-09-29 05:54:53.176434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 05:54:53.176467 | orchestrator | 2025-09-29 05:54:53.176485 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-29 05:54:53.176503 | orchestrator | Monday 29 September 2025 05:54:35 +0000 (0:00:01.011) 0:06:25.214 ****** 2025-09-29 05:54:53.176520 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.176537 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:54:53.176555 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:54:53.176572 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:54:53.176588 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:54:53.176604 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:53.176621 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:54:53.176637 | orchestrator | 2025-09-29 05:54:53.176654 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-29 05:54:53.176670 | orchestrator | Monday 29 September 2025 05:54:36 +0000 (0:00:00.807) 0:06:26.022 ****** 2025-09-29 05:54:53.176688 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.176705 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:54:53.176722 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:54:53.176738 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:54:53.176755 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:54:53.176773 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:53.176790 | orchestrator | ok: [testbed-node-2] 2025-09-29 
05:54:53.176806 | orchestrator | 2025-09-29 05:54:53.176824 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-29 05:54:53.176841 | orchestrator | Monday 29 September 2025 05:54:37 +0000 (0:00:00.815) 0:06:26.837 ****** 2025-09-29 05:54:53.176858 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:54:53.176874 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:54:53.176915 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:54:53.176931 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:54:53.176947 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:54:53.176964 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:54:53.176981 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:54:53.176998 | orchestrator | 2025-09-29 05:54:53.177014 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-29 05:54:53.177032 | orchestrator | Monday 29 September 2025 05:54:38 +0000 (0:00:00.508) 0:06:27.346 ****** 2025-09-29 05:54:53.177050 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:54:53.177067 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.177082 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:54:53.177099 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:54:53.177116 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:54:53.177134 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:54:53.177151 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:54:53.177167 | orchestrator | 2025-09-29 05:54:53.177185 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-29 05:54:53.177203 | orchestrator | Monday 29 September 2025 05:54:39 +0000 (0:00:01.557) 0:06:28.903 ****** 2025-09-29 05:54:53.177220 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:54:53.177236 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:54:53.177253 | orchestrator | skipping: 
[testbed-node-4] 2025-09-29 05:54:53.177271 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:54:53.177287 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:54:53.177303 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:54:53.177320 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:54:53.177337 | orchestrator | 2025-09-29 05:54:53.177353 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-29 05:54:53.177370 | orchestrator | Monday 29 September 2025 05:54:40 +0000 (0:00:00.514) 0:06:29.418 ****** 2025-09-29 05:54:53.177387 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.177405 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:54:53.177421 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:54:53.177449 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:54:53.177466 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:54:53.177479 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:54:53.177493 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:54:53.177507 | orchestrator | 2025-09-29 05:54:53.177520 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-29 05:54:53.177534 | orchestrator | Monday 29 September 2025 05:54:47 +0000 (0:00:07.474) 0:06:36.892 ****** 2025-09-29 05:54:53.177547 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.177560 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:54:53.177574 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:54:53.177587 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:54:53.177599 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:54:53.177612 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:54:53.177626 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:54:53.177640 | orchestrator | 2025-09-29 05:54:53.177654 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-09-29 05:54:53.177667 | orchestrator | Monday 29 September 2025 05:54:48 +0000 (0:00:01.200) 0:06:38.093 ****** 2025-09-29 05:54:53.177681 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.177694 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:54:53.177707 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:54:53.177721 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:54:53.177735 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:54:53.177748 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:54:53.177760 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:54:53.177772 | orchestrator | 2025-09-29 05:54:53.177786 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-29 05:54:53.177801 | orchestrator | Monday 29 September 2025 05:54:50 +0000 (0:00:01.845) 0:06:39.938 ****** 2025-09-29 05:54:53.177815 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.177829 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:54:53.177844 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:54:53.177858 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:54:53.177873 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:54:53.177961 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:54:53.177978 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:54:53.177993 | orchestrator | 2025-09-29 05:54:53.178008 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-29 05:54:53.178093 | orchestrator | Monday 29 September 2025 05:54:52 +0000 (0:00:01.749) 0:06:41.687 ****** 2025-09-29 05:54:53.178117 | orchestrator | ok: [testbed-manager] 2025-09-29 05:54:53.178169 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:54:53.178183 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:54:53.178198 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:54:53.178227 | orchestrator | ok: 
[testbed-node-0] 2025-09-29 05:55:22.516185 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:55:22.516302 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:55:22.516318 | orchestrator | 2025-09-29 05:55:22.516331 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-29 05:55:22.516344 | orchestrator | Monday 29 September 2025 05:54:53 +0000 (0:00:00.818) 0:06:42.506 ****** 2025-09-29 05:55:22.516355 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:55:22.516367 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:55:22.516378 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:55:22.516389 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:55:22.516400 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:55:22.516410 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:55:22.516421 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:55:22.516432 | orchestrator | 2025-09-29 05:55:22.516443 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-29 05:55:22.516454 | orchestrator | Monday 29 September 2025 05:54:54 +0000 (0:00:00.959) 0:06:43.465 ****** 2025-09-29 05:55:22.516465 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:55:22.516501 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:55:22.516512 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:55:22.516523 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:55:22.516534 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:55:22.516545 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:55:22.516555 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:55:22.516566 | orchestrator | 2025-09-29 05:55:22.516577 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-29 05:55:22.516588 | orchestrator | Monday 29 September 2025 05:54:54 +0000 (0:00:00.536) 0:06:44.002 
****** 2025-09-29 05:55:22.516599 | orchestrator | ok: [testbed-manager] 2025-09-29 05:55:22.516610 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:55:22.516621 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:55:22.516632 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:55:22.516642 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:55:22.516653 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:55:22.516664 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:55:22.516674 | orchestrator | 2025-09-29 05:55:22.516685 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-29 05:55:22.516696 | orchestrator | Monday 29 September 2025 05:54:55 +0000 (0:00:00.509) 0:06:44.512 ****** 2025-09-29 05:55:22.516707 | orchestrator | ok: [testbed-manager] 2025-09-29 05:55:22.516720 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:55:22.516733 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:55:22.516745 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:55:22.516758 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:55:22.516770 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:55:22.516782 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:55:22.516795 | orchestrator | 2025-09-29 05:55:22.516807 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-29 05:55:22.516819 | orchestrator | Monday 29 September 2025 05:54:55 +0000 (0:00:00.494) 0:06:45.007 ****** 2025-09-29 05:55:22.516831 | orchestrator | ok: [testbed-manager] 2025-09-29 05:55:22.516844 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:55:22.516856 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:55:22.516868 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:55:22.516911 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:55:22.516931 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:55:22.516951 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:55:22.516968 | orchestrator | 
2025-09-29 05:55:22.516987 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-29 05:55:22.516999 | orchestrator | Monday 29 September 2025 05:54:56 +0000 (0:00:00.506) 0:06:45.513 ****** 2025-09-29 05:55:22.517009 | orchestrator | ok: [testbed-manager] 2025-09-29 05:55:22.517020 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:55:22.517031 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:55:22.517041 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:55:22.517052 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:55:22.517062 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:55:22.517073 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:55:22.517084 | orchestrator | 2025-09-29 05:55:22.517094 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-29 05:55:22.517105 | orchestrator | Monday 29 September 2025 05:55:01 +0000 (0:00:05.670) 0:06:51.184 ****** 2025-09-29 05:55:22.517116 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:55:22.517127 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:55:22.517137 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:55:22.517149 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:55:22.517160 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:55:22.517170 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:55:22.517181 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:55:22.517192 | orchestrator | 2025-09-29 05:55:22.517203 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-29 05:55:22.517214 | orchestrator | Monday 29 September 2025 05:55:02 +0000 (0:00:00.456) 0:06:51.641 ****** 2025-09-29 05:55:22.517236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:55:22.517250 | orchestrator |
2025-09-29 05:55:22.517261 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-09-29 05:55:22.517272 | orchestrator | Monday 29 September 2025 05:55:03 +0000 (0:00:00.708) 0:06:52.349 ******
2025-09-29 05:55:22.517283 | orchestrator | ok: [testbed-manager]
2025-09-29 05:55:22.517294 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:22.517304 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:22.517315 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:22.517326 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:22.517336 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:22.517347 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:22.517357 | orchestrator |
2025-09-29 05:55:22.517368 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-09-29 05:55:22.517379 | orchestrator | Monday 29 September 2025 05:55:04 +0000 (0:00:01.824) 0:06:54.174 ******
2025-09-29 05:55:22.517390 | orchestrator | ok: [testbed-manager]
2025-09-29 05:55:22.517401 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:22.517412 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:22.517422 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:22.517433 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:22.517443 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:22.517454 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:22.517465 | orchestrator |
2025-09-29 05:55:22.517508 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-09-29 05:55:22.517520 | orchestrator | Monday 29 September 2025 05:55:05 +0000 (0:00:00.987) 0:06:55.161 ******
2025-09-29 05:55:22.517531 | orchestrator | ok: [testbed-manager]
2025-09-29 05:55:22.517541 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:22.517552 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:22.517562 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:22.517573 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:22.517583 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:22.517593 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:22.517604 | orchestrator |
2025-09-29 05:55:22.517615 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-09-29 05:55:22.517625 | orchestrator | Monday 29 September 2025 05:55:06 +0000 (0:00:00.721) 0:06:55.883 ******
2025-09-29 05:55:22.517649 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-29 05:55:22.517662 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-29 05:55:22.517673 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-29 05:55:22.517683 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-29 05:55:22.517694 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-29 05:55:22.517705 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-29 05:55:22.517715 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-09-29 05:55:22.517726 | orchestrator |
2025-09-29 05:55:22.517737 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-09-29 05:55:22.517747 | orchestrator | Monday 29 September 2025 05:55:08 +0000 (0:00:01.510) 0:06:57.394 ******
2025-09-29 05:55:22.517766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:55:22.517777 | orchestrator |
2025-09-29 05:55:22.517788 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-09-29 05:55:22.517799 | orchestrator | Monday 29 September 2025 05:55:08 +0000 (0:00:00.816) 0:06:58.210 ******
2025-09-29 05:55:22.517809 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:22.517820 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:22.517831 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:22.517842 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:22.517853 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:22.517863 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:22.517874 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:22.517927 | orchestrator |
2025-09-29 05:55:22.517939 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-09-29 05:55:22.517950 | orchestrator | Monday 29 September 2025 05:55:17 +0000 (0:00:08.532) 0:07:06.742 ******
2025-09-29 05:55:22.517961 | orchestrator | ok: [testbed-manager]
2025-09-29 05:55:22.517972 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:22.517983 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:22.517993 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:22.518004 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:22.518095 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:22.518110 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:22.518120 | orchestrator |
2025-09-29 05:55:22.518131 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-29 05:55:22.518142 | orchestrator | Monday 29 September 2025 05:55:19 +0000 (0:00:01.694) 0:07:08.436 ******
2025-09-29 05:55:22.518153 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:22.518163 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:22.518174 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:22.518184 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:22.518195 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:22.518205 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:22.518216 | orchestrator |
2025-09-29 05:55:22.518227 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-29 05:55:22.518238 | orchestrator | Monday 29 September 2025 05:55:20 +0000 (0:00:01.208) 0:07:09.645 ******
2025-09-29 05:55:22.518248 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:22.518259 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:22.518270 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:22.518281 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:22.518292 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:22.518302 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:22.518312 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:22.518323 | orchestrator |
2025-09-29 05:55:22.518334 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-29 05:55:22.518345 | orchestrator |
2025-09-29 05:55:22.518356 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-29 05:55:22.518366 | orchestrator | Monday 29 September 2025 05:55:22 +0000 (0:00:01.766) 0:07:11.412 ******
2025-09-29 05:55:22.518377 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:55:22.518387 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:55:22.518405 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:55:22.518417 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:55:22.518427 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:55:22.518438 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:55:22.518459 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:55:46.629232 | orchestrator |
2025-09-29 05:55:46.629342 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-29 05:55:46.629360 | orchestrator |
2025-09-29 05:55:46.629372 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-29 05:55:46.629411 | orchestrator | Monday 29 September 2025 05:55:22 +0000 (0:00:00.436) 0:07:11.849 ******
2025-09-29 05:55:46.629423 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.629435 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.629445 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.629456 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.629467 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.629477 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.629488 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.629498 | orchestrator |
2025-09-29 05:55:46.629509 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-29 05:55:46.629520 | orchestrator | Monday 29 September 2025 05:55:23 +0000 (0:00:01.278) 0:07:13.127 ******
2025-09-29 05:55:46.629530 | orchestrator | ok: [testbed-manager]
2025-09-29 05:55:46.629542 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:46.629553 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:46.629563 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:46.629574 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:46.629584 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:46.629595 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:46.629605 | orchestrator |
2025-09-29 05:55:46.629616 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-29 05:55:46.629627 | orchestrator | Monday 29 September 2025 05:55:25 +0000 (0:00:01.285) 0:07:14.412 ******
2025-09-29 05:55:46.629638 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:55:46.629648 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:55:46.629659 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:55:46.629669 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:55:46.629680 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:55:46.629691 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:55:46.629702 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:55:46.629714 | orchestrator |
2025-09-29 05:55:46.629727 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-29 05:55:46.629740 | orchestrator | Monday 29 September 2025 05:55:25 +0000 (0:00:00.408) 0:07:14.820 ******
2025-09-29 05:55:46.629753 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:55:46.629767 | orchestrator |
2025-09-29 05:55:46.629780 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-29 05:55:46.629793 | orchestrator | Monday 29 September 2025 05:55:26 +0000 (0:00:00.809) 0:07:15.630 ******
2025-09-29 05:55:46.629806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:55:46.629821 | orchestrator |
2025-09-29 05:55:46.629833 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-29 05:55:46.629846 | orchestrator | Monday 29 September 2025 05:55:26 +0000 (0:00:00.707) 0:07:16.337 ******
2025-09-29 05:55:46.629858 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.629900 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.629913 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.629925 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.629938 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.629950 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.629962 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.629982 | orchestrator |
2025-09-29 05:55:46.630001 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-29 05:55:46.630090 | orchestrator | Monday 29 September 2025 05:55:35 +0000 (0:00:08.188) 0:07:24.526 ******
2025-09-29 05:55:46.630111 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.630132 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.630165 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.630185 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.630244 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.630265 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.630277 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.630288 | orchestrator |
2025-09-29 05:55:46.630299 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-29 05:55:46.630310 | orchestrator | Monday 29 September 2025 05:55:35 +0000 (0:00:00.723) 0:07:25.250 ******
2025-09-29 05:55:46.630321 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.630331 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.630343 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.630353 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.630364 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.630374 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.630385 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.630395 | orchestrator |
2025-09-29 05:55:46.630406 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-29 05:55:46.630417 | orchestrator | Monday 29 September 2025 05:55:37 +0000 (0:00:01.358) 0:07:26.608 ******
2025-09-29 05:55:46.630428 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.630439 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.630449 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.630460 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.630470 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.630481 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.630491 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.630502 | orchestrator |
2025-09-29 05:55:46.630513 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-29 05:55:46.630524 | orchestrator | Monday 29 September 2025 05:55:38 +0000 (0:00:01.561) 0:07:28.170 ******
2025-09-29 05:55:46.630549 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.630560 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.630571 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.630581 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.630611 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.630622 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.630633 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.630644 | orchestrator |
2025-09-29 05:55:46.630654 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-29 05:55:46.630665 | orchestrator | Monday 29 September 2025 05:55:39 +0000 (0:00:01.102) 0:07:29.272 ******
2025-09-29 05:55:46.630676 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.630686 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.630697 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.630707 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.630718 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.630728 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.630739 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.630750 | orchestrator |
2025-09-29 05:55:46.630760 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-29 05:55:46.630771 | orchestrator |
2025-09-29 05:55:46.630782 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-29 05:55:46.630792 | orchestrator | Monday 29 September 2025 05:55:41 +0000 (0:00:01.143) 0:07:30.416 ******
2025-09-29 05:55:46.630803 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:55:46.630815 | orchestrator |
2025-09-29 05:55:46.630825 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-29 05:55:46.630836 | orchestrator | Monday 29 September 2025 05:55:41 +0000 (0:00:00.781) 0:07:31.198 ******
2025-09-29 05:55:46.630847 | orchestrator | ok: [testbed-manager]
2025-09-29 05:55:46.630858 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:46.630909 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:46.630922 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:46.630933 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:46.630943 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:46.630954 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:46.630964 | orchestrator |
2025-09-29 05:55:46.630975 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-29 05:55:46.630986 | orchestrator | Monday 29 September 2025 05:55:42 +0000 (0:00:00.794) 0:07:31.992 ******
2025-09-29 05:55:46.630997 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.631007 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.631018 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.631029 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.631039 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.631050 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.631060 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.631071 | orchestrator |
2025-09-29 05:55:46.631082 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-29 05:55:46.631093 | orchestrator | Monday 29 September 2025 05:55:43 +0000 (0:00:01.248) 0:07:33.241 ******
2025-09-29 05:55:46.631104 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 05:55:46.631115 | orchestrator |
2025-09-29 05:55:46.631125 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-29 05:55:46.631136 | orchestrator | Monday 29 September 2025 05:55:44 +0000 (0:00:00.689) 0:07:33.931 ******
2025-09-29 05:55:46.631147 | orchestrator | ok: [testbed-manager]
2025-09-29 05:55:46.631157 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:55:46.631168 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:55:46.631178 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:55:46.631189 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:55:46.631199 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:55:46.631210 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:55:46.631220 | orchestrator |
2025-09-29 05:55:46.631231 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-29 05:55:46.631241 | orchestrator | Monday 29 September 2025 05:55:45 +0000 (0:00:00.759) 0:07:34.691 ******
2025-09-29 05:55:46.631252 | orchestrator | changed: [testbed-manager]
2025-09-29 05:55:46.631263 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:55:46.631273 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:55:46.631284 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:55:46.631295 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:55:46.631305 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:55:46.631316 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:55:46.631326 | orchestrator |
2025-09-29 05:55:46.631337 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 05:55:46.631349 | orchestrator | testbed-manager : ok=164  changed=38  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2025-09-29 05:55:46.631361 | orchestrator | testbed-node-0 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-29 05:55:46.631372 | orchestrator | testbed-node-1 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-29 05:55:46.631383 | orchestrator | testbed-node-2 : ok=173  changed=67  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-29 05:55:46.631394 | orchestrator | testbed-node-3 : ok=171  changed=63  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2025-09-29 05:55:46.631404 | orchestrator | testbed-node-4 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-29 05:55:46.631427 | orchestrator | testbed-node-5 : ok=171  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-29 05:55:46.631438 | orchestrator |
2025-09-29 05:55:46.631449 | orchestrator |
2025-09-29 05:55:46.631467 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 05:55:47.100534 | orchestrator | Monday 29 September 2025 05:55:46 +0000 (0:00:01.254) 0:07:35.945 ******
2025-09-29 05:55:47.100637 | orchestrator | ===============================================================================
2025-09-29 05:55:47.100652 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.18s
2025-09-29 05:55:47.100663 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.67s
2025-09-29 05:55:47.100675 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 32.98s
2025-09-29 05:55:47.100686 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.15s
2025-09-29 05:55:47.100697 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.94s
2025-09-29 05:55:47.100709 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.86s
2025-09-29 05:55:47.100720 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.19s
2025-09-29 05:55:47.100731 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.40s
2025-09-29 05:55:47.100742 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.53s
2025-09-29 05:55:47.100753 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.38s
2025-09-29 05:55:47.100764 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.19s
2025-09-29 05:55:47.100775 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.86s
2025-09-29 05:55:47.100786 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.54s
2025-09-29 05:55:47.100797 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.49s
2025-09-29 05:55:47.100808 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.47s
2025-09-29 05:55:47.100820 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.03s
2025-09-29 05:55:47.100830 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.82s
2025-09-29 05:55:47.100841 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.78s
2025-09-29 05:55:47.100852 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.71s
2025-09-29 05:55:47.100864 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.67s
2025-09-29 05:55:47.387668 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-29 05:55:47.387756 | orchestrator | + osism apply network
2025-09-29 05:56:00.102653 | orchestrator | 2025-09-29 05:56:00 | INFO  | Task 8f0c4e82-a101-4f2e-8fa4-b3de8c52ce50 (network) was prepared for execution.
2025-09-29 05:56:00.102768 | orchestrator | 2025-09-29 05:56:00 | INFO  | It takes a moment until task 8f0c4e82-a101-4f2e-8fa4-b3de8c52ce50 (network) has been started and output is visible here.
2025-09-29 05:56:27.309545 | orchestrator |
2025-09-29 05:56:27.309658 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-29 05:56:27.309674 | orchestrator |
2025-09-29 05:56:27.309686 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-29 05:56:27.309697 | orchestrator | Monday 29 September 2025 05:56:04 +0000 (0:00:00.248) 0:00:00.248 ******
2025-09-29 05:56:27.309707 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.309717 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:56:27.309728 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:56:27.309737 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:56:27.309747 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:56:27.309756 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:56:27.309766 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:56:27.309799 | orchestrator |
2025-09-29 05:56:27.309809 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-29 05:56:27.309819 | orchestrator | Monday 29 September 2025 05:56:04 +0000 (0:00:00.602) 0:00:00.850 ******
2025-09-29 05:56:27.309831 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 05:56:27.309843 | orchestrator |
2025-09-29 05:56:27.309853 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-29 05:56:27.309863 | orchestrator | Monday 29 September 2025 05:56:05 +0000 (0:00:01.049) 0:00:01.900 ******
2025-09-29 05:56:27.309918 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.309928 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:56:27.309937 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:56:27.309947 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:56:27.309956 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:56:27.309965 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:56:27.309975 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:56:27.309984 | orchestrator |
2025-09-29 05:56:27.309994 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-29 05:56:27.310004 | orchestrator | Monday 29 September 2025 05:56:07 +0000 (0:00:01.929) 0:00:03.829 ******
2025-09-29 05:56:27.310013 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.310130 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:56:27.310142 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:56:27.310153 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:56:27.310164 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:56:27.310176 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:56:27.310187 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:56:27.310198 | orchestrator |
2025-09-29 05:56:27.310223 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-29 05:56:27.310235 | orchestrator | Monday 29 September 2025 05:56:09 +0000 (0:00:01.751) 0:00:05.580 ******
2025-09-29 05:56:27.310246 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-29 05:56:27.310258 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-29 05:56:27.310269 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-29 05:56:27.310280 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-29 05:56:27.310293 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-29 05:56:27.310302 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-29 05:56:27.310312 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-29 05:56:27.310322 | orchestrator |
2025-09-29 05:56:27.310332 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-29 05:56:27.310342 | orchestrator | Monday 29 September 2025 05:56:10 +0000 (0:00:00.979) 0:00:06.560 ******
2025-09-29 05:56:27.310351 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-29 05:56:27.310362 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 05:56:27.310371 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-29 05:56:27.310381 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-29 05:56:27.310390 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-29 05:56:27.310400 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-29 05:56:27.310409 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-29 05:56:27.310419 | orchestrator |
2025-09-29 05:56:27.310428 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-29 05:56:27.310438 | orchestrator | Monday 29 September 2025 05:56:13 +0000 (0:00:03.323) 0:00:09.883 ******
2025-09-29 05:56:27.310447 | orchestrator | changed: [testbed-manager]
2025-09-29 05:56:27.310457 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:56:27.310467 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:56:27.310476 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:56:27.310486 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:56:27.310505 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:56:27.310514 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:56:27.310524 | orchestrator |
2025-09-29 05:56:27.310533 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-29 05:56:27.310543 | orchestrator | Monday 29 September 2025 05:56:15 +0000 (0:00:01.481) 0:00:11.365 ******
2025-09-29 05:56:27.310553 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 05:56:27.310562 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-29 05:56:27.310571 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-29 05:56:27.310581 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-29 05:56:27.310591 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-29 05:56:27.310600 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-29 05:56:27.310610 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-29 05:56:27.310619 | orchestrator |
2025-09-29 05:56:27.310628 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-29 05:56:27.310638 | orchestrator | Monday 29 September 2025 05:56:17 +0000 (0:00:01.945) 0:00:13.311 ******
2025-09-29 05:56:27.310647 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.310657 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:56:27.310666 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:56:27.310676 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:56:27.310685 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:56:27.310694 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:56:27.310704 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:56:27.310713 | orchestrator |
2025-09-29 05:56:27.310723 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-29 05:56:27.310751 | orchestrator | Monday 29 September 2025 05:56:18 +0000 (0:00:00.987) 0:00:14.299 ******
2025-09-29 05:56:27.310761 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:56:27.310771 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:56:27.310780 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:56:27.310790 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:56:27.310799 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:56:27.310809 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:56:27.310818 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:56:27.310828 | orchestrator |
2025-09-29 05:56:27.310838 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-29 05:56:27.310848 | orchestrator | Monday 29 September 2025 05:56:18 +0000 (0:00:00.584) 0:00:14.883 ******
2025-09-29 05:56:27.310858 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.310887 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:56:27.310897 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:56:27.310906 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:56:27.310916 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:56:27.310925 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:56:27.310935 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:56:27.310944 | orchestrator |
2025-09-29 05:56:27.310954 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-29 05:56:27.310964 | orchestrator | Monday 29 September 2025 05:56:20 +0000 (0:00:02.230) 0:00:17.114 ******
2025-09-29 05:56:27.310973 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:56:27.310983 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:56:27.310992 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:56:27.311002 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:56:27.311011 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:56:27.311021 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:56:27.311032 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-29 05:56:27.311042 | orchestrator |
2025-09-29 05:56:27.311052 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-29 05:56:27.311061 | orchestrator | Monday 29 September 2025 05:56:21 +0000 (0:00:00.787) 0:00:17.901 ******
2025-09-29 05:56:27.311071 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.311088 | orchestrator | changed: [testbed-node-0]
2025-09-29 05:56:27.311097 | orchestrator | changed: [testbed-node-1]
2025-09-29 05:56:27.311107 | orchestrator | changed: [testbed-node-2]
2025-09-29 05:56:27.311116 | orchestrator | changed: [testbed-node-3]
2025-09-29 05:56:27.311126 | orchestrator | changed: [testbed-node-4]
2025-09-29 05:56:27.311135 | orchestrator | changed: [testbed-node-5]
2025-09-29 05:56:27.311144 | orchestrator |
2025-09-29 05:56:27.311154 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-29 05:56:27.311164 | orchestrator | Monday 29 September 2025 05:56:23 +0000 (0:00:01.468) 0:00:19.370 ******
2025-09-29 05:56:27.311174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 05:56:27.311186 | orchestrator |
2025-09-29 05:56:27.311196 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-29 05:56:27.311206 | orchestrator | Monday 29 September 2025 05:56:24 +0000 (0:00:01.132) 0:00:20.502 ******
2025-09-29 05:56:27.311215 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.311225 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:56:27.311234 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:56:27.311244 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:56:27.311253 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:56:27.311263 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:56:27.311272 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:56:27.311282 | orchestrator |
2025-09-29 05:56:27.311291 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-29 05:56:27.311301 | orchestrator | Monday 29 September 2025 05:56:25 +0000 (0:00:00.914) 0:00:21.416 ******
2025-09-29 05:56:27.311310 | orchestrator | ok: [testbed-manager]
2025-09-29 05:56:27.311320 | orchestrator | ok: [testbed-node-0]
2025-09-29 05:56:27.311329 | orchestrator | ok: [testbed-node-1]
2025-09-29 05:56:27.311339 | orchestrator | ok: [testbed-node-2]
2025-09-29 05:56:27.311348 | orchestrator | ok: [testbed-node-3]
2025-09-29 05:56:27.311357 | orchestrator | ok: [testbed-node-4]
2025-09-29 05:56:27.311367 | orchestrator | ok: [testbed-node-5]
2025-09-29 05:56:27.311376 | orchestrator |
2025-09-29 05:56:27.311386 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-29 05:56:27.311395 | orchestrator | Monday 29 September 2025 05:56:26 +0000 (0:00:00.832) 0:00:22.249 ******
2025-09-29 05:56:27.311405 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-29 05:56:27.311414 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-29 05:56:27.311424 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-29 05:56:27.311434 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-29 05:56:27.311443 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-29 05:56:27.311453 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-29 05:56:27.311462 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-29 05:56:27.311471 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-29 05:56:27.311481 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-29 05:56:27.311491 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-29 05:56:27.311500 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-29 05:56:27.311509 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-29 05:56:27.311519 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-29 05:56:27.311529 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-29 05:56:27.311538 | orchestrator |
2025-09-29 05:56:27.311554 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-29 05:56:43.891046 | orchestrator | Monday 29 September 2025 05:56:27 +0000 (0:00:01.169) 0:00:23.419 ******
2025-09-29 05:56:43.891176 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:56:43.891202 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:56:43.891222 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:56:43.891240 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:56:43.891259 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:56:43.891276 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:56:43.891295 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:56:43.891313 | orchestrator |
2025-09-29 05:56:43.891334 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-29 05:56:43.891352 | orchestrator | Monday 29 September 2025 05:56:27 +0000 (0:00:00.685) 0:00:24.104 ******
2025-09-29 05:56:43.891373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-3
2025-09-29 05:56:43.891394 | orchestrator |
2025-09-29 05:56:43.891412 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-29 05:56:43.891430 | orchestrator | Monday 29 September 2025 05:56:32 +0000 (0:00:04.758) 0:00:28.863 ******
2025-09-29 05:56:43.891449 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12',
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891487 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.891534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891615 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891633 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.891671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.891720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.891770 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.891792 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.891812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.891832 | orchestrator | 2025-09-29 05:56:43.891851 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-09-29 05:56:43.891907 | orchestrator | Monday 29 September 2025 05:56:38 +0000 (0:00:05.626) 0:00:34.490 ****** 2025-09-29 05:56:43.891930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891949 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.891994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.892005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.892016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.892027 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.892051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.892062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-09-29 05:56:43.892073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.892084 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:43.892106 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:49.518766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-29 05:56:49.518927 | orchestrator | 2025-09-29 05:56:49.518946 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-29 05:56:49.518959 | orchestrator | Monday 29 September 2025 05:56:43 +0000 (0:00:05.507) 0:00:39.997 ****** 2025-09-29 05:56:49.518973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 05:56:49.518984 | orchestrator | 2025-09-29 05:56:49.518996 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-29 05:56:49.519007 | orchestrator | Monday 29 September 2025 05:56:45 +0000 (0:00:01.249) 0:00:41.247 ****** 2025-09-29 05:56:49.519018 | orchestrator | ok: [testbed-manager] 2025-09-29 05:56:49.519030 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:56:49.519041 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:56:49.519051 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:56:49.519061 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:56:49.519072 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:56:49.519082 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:56:49.519093 | orchestrator | 2025-09-29 05:56:49.519104 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-09-29 05:56:49.519114 | orchestrator | Monday 29 September 2025 05:56:46 +0000 (0:00:01.213) 0:00:42.460 ****** 2025-09-29 05:56:49.519125 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-29 05:56:49.519137 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-29 05:56:49.519147 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-29 05:56:49.519174 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-29 05:56:49.519185 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-29 05:56:49.519196 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-29 05:56:49.519207 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-29 05:56:49.519241 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:56:49.519253 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-29 05:56:49.519264 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-29 05:56:49.519275 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-29 05:56:49.519285 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-29 05:56:49.519296 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-29 05:56:49.519309 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:56:49.519321 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-29 05:56:49.519334 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-09-29 05:56:49.519346 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-29 05:56:49.519359 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-29 05:56:49.519371 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:56:49.519384 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-29 05:56:49.519397 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-29 05:56:49.519408 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-29 05:56:49.519418 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-29 05:56:49.519429 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:56:49.519440 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-29 05:56:49.519450 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-29 05:56:49.519461 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-29 05:56:49.519472 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-29 05:56:49.519482 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:56:49.519493 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:56:49.519504 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-29 05:56:49.519514 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-29 05:56:49.519525 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-29 05:56:49.519536 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-29 05:56:49.519547 | 
orchestrator | skipping: [testbed-node-5]
2025-09-29 05:56:49.519558 | orchestrator |
2025-09-29 05:56:49.519568 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-09-29 05:56:49.519597 | orchestrator | Monday 29 September 2025 05:56:48 +0000 (0:00:01.793) 0:00:44.253 ******
2025-09-29 05:56:49.519608 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:56:49.519619 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:56:49.519630 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:56:49.519641 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:56:49.519651 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:56:49.519662 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:56:49.519673 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:56:49.519683 | orchestrator |
2025-09-29 05:56:49.519694 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-09-29 05:56:49.519705 | orchestrator | Monday 29 September 2025 05:56:48 +0000 (0:00:00.560) 0:00:44.813 ******
2025-09-29 05:56:49.519716 | orchestrator | skipping: [testbed-manager]
2025-09-29 05:56:49.519727 | orchestrator | skipping: [testbed-node-0]
2025-09-29 05:56:49.519737 | orchestrator | skipping: [testbed-node-1]
2025-09-29 05:56:49.519759 | orchestrator | skipping: [testbed-node-2]
2025-09-29 05:56:49.519770 | orchestrator | skipping: [testbed-node-3]
2025-09-29 05:56:49.519781 | orchestrator | skipping: [testbed-node-4]
2025-09-29 05:56:49.519791 | orchestrator | skipping: [testbed-node-5]
2025-09-29 05:56:49.519802 | orchestrator |
2025-09-29 05:56:49.519813 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 05:56:49.519824 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-29 05:56:49.519836 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-29 05:56:49.519847 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-29 05:56:49.519858 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-29 05:56:49.519907 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-29 05:56:49.519918 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-29 05:56:49.519929 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-29 05:56:49.519940 | orchestrator |
2025-09-29 05:56:49.519952 | orchestrator |
2025-09-29 05:56:49.519963 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 05:56:49.519974 | orchestrator | Monday 29 September 2025 05:56:49 +0000 (0:00:00.599) 0:00:45.413 ******
2025-09-29 05:56:49.519985 | orchestrator | ===============================================================================
2025-09-29 05:56:49.519996 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.63s
2025-09-29 05:56:49.520007 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.51s
2025-09-29 05:56:49.520018 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.76s
2025-09-29 05:56:49.520029 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.32s
2025-09-29 05:56:49.520039 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.23s
2025-09-29 05:56:49.520050 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.95s
2025-09-29 05:56:49.520061 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.93s
2025-09-29 05:56:49.520072 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.79s
2025-09-29 05:56:49.520083 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.75s
2025-09-29 05:56:49.520093 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s
2025-09-29 05:56:49.520104 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.47s
2025-09-29 05:56:49.520115 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.25s
2025-09-29 05:56:49.520126 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s
2025-09-29 05:56:49.520137 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.17s
2025-09-29 05:56:49.520147 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.13s
2025-09-29 05:56:49.520158 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.05s
2025-09-29 05:56:49.520169 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.99s
2025-09-29 05:56:49.520180 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s
2025-09-29 05:56:49.520198 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.91s
2025-09-29 05:56:49.520209 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.83s
2025-09-29 05:56:49.712588 | orchestrator | + osism apply wireguard
2025-09-29 05:57:01.513678 | orchestrator | 2025-09-29 05:57:01 | INFO  | Task 7c4660f2-ad62-48f6-80e8-d84ee6919b33 (wireguard) was prepared for execution.
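The "Create systemd networkd netdev files" task above renders one `.netdev` unit per VXLAN tunnel, using the parameters visible in the loop items (VNI 42/23, MTU 1350, a per-host `local_ip`, and the remote `dests`). As a rough sketch of what such a unit contains — section and key names follow systemd.netdev(5), but the file layout is an assumption, not the osism.commons.network role's actual template:

```python
# Hypothetical sketch: build the text of a systemd-networkd .netdev unit for
# one VXLAN tunnel from the log above (vxlan0, VNI 42, MTU 1350 on
# testbed-manager). Keys follow systemd.netdev(5); this is NOT the role's
# real template, just an illustration of the parameters it consumes.
def render_vxlan_netdev(name: str, vni: int, local_ip: str, mtu: int) -> str:
    lines = [
        "[NetDev]",
        f"Name={name}",
        "Kind=vxlan",
        f"MTUBytes={mtu}",
        "",
        "[VXLAN]",
        f"VNI={vni}",
        f"Local={local_ip}",
    ]
    return "\n".join(lines) + "\n"

# Parameters taken from the testbed-manager loop item in the log.
unit = render_vxlan_netdev("vxlan0", 42, "192.168.16.5", 1350)
print(unit)
```

The matching `.network` file (created by the next task) would then assign the addresses such as 192.168.112.5/20 to the interface.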
2025-09-29 05:57:01.513794 | orchestrator | 2025-09-29 05:57:01 | INFO  | It takes a moment until task 7c4660f2-ad62-48f6-80e8-d84ee6919b33 (wireguard) has been started and output is visible here. 2025-09-29 05:57:19.228422 | orchestrator | 2025-09-29 05:57:19.228538 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-29 05:57:19.228557 | orchestrator | 2025-09-29 05:57:19.228570 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-29 05:57:19.228582 | orchestrator | Monday 29 September 2025 05:57:05 +0000 (0:00:00.207) 0:00:00.208 ****** 2025-09-29 05:57:19.228594 | orchestrator | ok: [testbed-manager] 2025-09-29 05:57:19.228614 | orchestrator | 2025-09-29 05:57:19.228633 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-29 05:57:19.228653 | orchestrator | Monday 29 September 2025 05:57:06 +0000 (0:00:01.247) 0:00:01.455 ****** 2025-09-29 05:57:19.228670 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:19.228689 | orchestrator | 2025-09-29 05:57:19.228707 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-29 05:57:19.228724 | orchestrator | Monday 29 September 2025 05:57:12 +0000 (0:00:06.000) 0:00:07.456 ****** 2025-09-29 05:57:19.228740 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:19.228758 | orchestrator | 2025-09-29 05:57:19.228777 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-29 05:57:19.228795 | orchestrator | Monday 29 September 2025 05:57:12 +0000 (0:00:00.513) 0:00:07.969 ****** 2025-09-29 05:57:19.228815 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:19.228835 | orchestrator | 2025-09-29 05:57:19.228854 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-29 05:57:19.228923 | orchestrator 
| Monday 29 September 2025 05:57:13 +0000 (0:00:00.379) 0:00:08.349 ****** 2025-09-29 05:57:19.228935 | orchestrator | ok: [testbed-manager] 2025-09-29 05:57:19.228946 | orchestrator | 2025-09-29 05:57:19.228957 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-29 05:57:19.228971 | orchestrator | Monday 29 September 2025 05:57:13 +0000 (0:00:00.464) 0:00:08.813 ****** 2025-09-29 05:57:19.228985 | orchestrator | ok: [testbed-manager] 2025-09-29 05:57:19.228998 | orchestrator | 2025-09-29 05:57:19.229010 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-29 05:57:19.229023 | orchestrator | Monday 29 September 2025 05:57:14 +0000 (0:00:00.473) 0:00:09.287 ****** 2025-09-29 05:57:19.229035 | orchestrator | ok: [testbed-manager] 2025-09-29 05:57:19.229048 | orchestrator | 2025-09-29 05:57:19.229079 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-29 05:57:19.229092 | orchestrator | Monday 29 September 2025 05:57:14 +0000 (0:00:00.369) 0:00:09.656 ****** 2025-09-29 05:57:19.229104 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:19.229116 | orchestrator | 2025-09-29 05:57:19.229128 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-29 05:57:19.229141 | orchestrator | Monday 29 September 2025 05:57:15 +0000 (0:00:01.115) 0:00:10.772 ****** 2025-09-29 05:57:19.229153 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-29 05:57:19.229166 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:19.229178 | orchestrator | 2025-09-29 05:57:19.229191 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-29 05:57:19.229203 | orchestrator | Monday 29 September 2025 05:57:16 +0000 (0:00:00.860) 0:00:11.633 ****** 2025-09-29 05:57:19.229215 | orchestrator | changed: 
[testbed-manager]
2025-09-29 05:57:19.229227 | orchestrator |
2025-09-29 05:57:19.229241 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-09-29 05:57:19.229278 | orchestrator | Monday 29 September 2025 05:57:18 +0000 (0:00:01.513) 0:00:13.147 ******
2025-09-29 05:57:19.229291 | orchestrator | changed: [testbed-manager]
2025-09-29 05:57:19.229304 | orchestrator |
2025-09-29 05:57:19.229316 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 05:57:19.229327 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 05:57:19.229339 | orchestrator |
2025-09-29 05:57:19.229350 | orchestrator |
2025-09-29 05:57:19.229361 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 05:57:19.229372 | orchestrator | Monday 29 September 2025 05:57:19 +0000 (0:00:00.858) 0:00:14.005 ******
2025-09-29 05:57:19.229383 | orchestrator | ===============================================================================
2025-09-29 05:57:19.229393 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.00s
2025-09-29 05:57:19.229404 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.51s
2025-09-29 05:57:19.229415 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.25s
2025-09-29 05:57:19.229426 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.12s
2025-09-29 05:57:19.229436 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.86s
2025-09-29 05:57:19.229447 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.86s
2025-09-29 05:57:19.229457 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.51s
2025-09-29 05:57:19.229468 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.47s
2025-09-29 05:57:19.229479 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.46s
2025-09-29 05:57:19.229489 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s
2025-09-29 05:57:19.229500 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s
2025-09-29 05:57:19.411944 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-09-29 05:57:19.437975 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-09-29 05:57:19.438069 | orchestrator | Dload Upload Total Spent Left Speed
2025-09-29 05:57:19.519141 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 173 0 --:--:-- --:--:-- --:--:-- 175
2025-09-29 05:57:19.533992 | orchestrator | + osism apply --environment custom workarounds
2025-09-29 05:57:21.172649 | orchestrator | 2025-09-29 05:57:21 | INFO  | Trying to run play workarounds in environment custom
2025-09-29 05:57:31.325600 | orchestrator | 2025-09-29 05:57:31 | INFO  | Task d3fa22d3-0735-4c96-a10b-282ba7d3d40e (workarounds) was prepared for execution.
2025-09-29 05:57:31.325718 | orchestrator | 2025-09-29 05:57:31 | INFO  | It takes a moment until task d3fa22d3-0735-4c96-a10b-282ba7d3d40e (workarounds) has been started and output is visible here.
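The wireguard play above generated server keys and a preshared key, templated /etc/wireguard/wg0.conf plus client configs on testbed-manager, and enabled wg-quick@wg0. A minimal sketch of what such a wg0.conf looks like, using the standard wg-quick(8) `[Interface]`/`[Peer]` keys — the keys, addresses, and listen port here are placeholders, not values from this run or the role's actual template:

```python
# Hypothetical sketch of a minimal WireGuard server config like the wg0.conf
# the osism.services.wireguard role deploys above. All values below are
# placeholders/assumptions; only the key names come from wg-quick(8).
def render_wg0_conf(private_key: str, address: str, listen_port: int,
                    peer_public_key: str, preshared_key: str,
                    allowed_ips: str) -> str:
    return (
        "[Interface]\n"
        f"PrivateKey = {private_key}\n"
        f"Address = {address}\n"
        f"ListenPort = {listen_port}\n"
        "\n"
        "[Peer]\n"
        f"PublicKey = {peer_public_key}\n"
        f"PresharedKey = {preshared_key}\n"
        f"AllowedIPs = {allowed_ips}\n"
    )

conf = render_wg0_conf("<server-private-key>", "192.0.2.1/24", 51820,
                       "<client-public-key>", "<preshared-key>",
                       "192.0.2.2/32")
print(conf)
```

The "Copy client configuration files" task produces the mirror-image config for each client, with the server's public key in its `[Peer]` section.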
2025-09-29 05:57:55.879222 | orchestrator | 2025-09-29 05:57:55.879333 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 05:57:55.879352 | orchestrator | 2025-09-29 05:57:55.879364 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-29 05:57:55.879375 | orchestrator | Monday 29 September 2025 05:57:35 +0000 (0:00:00.147) 0:00:00.147 ****** 2025-09-29 05:57:55.879386 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-29 05:57:55.879398 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-29 05:57:55.879409 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-29 05:57:55.879420 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-29 05:57:55.879430 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-29 05:57:55.879465 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-29 05:57:55.879476 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-29 05:57:55.879487 | orchestrator | 2025-09-29 05:57:55.879497 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-29 05:57:55.879508 | orchestrator | 2025-09-29 05:57:55.879518 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-29 05:57:55.879529 | orchestrator | Monday 29 September 2025 05:57:36 +0000 (0:00:00.777) 0:00:00.924 ****** 2025-09-29 05:57:55.879540 | orchestrator | ok: [testbed-manager] 2025-09-29 05:57:55.879552 | orchestrator | 2025-09-29 05:57:55.879577 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-29 05:57:55.879595 | orchestrator | 2025-09-29 05:57:55.879614 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-29 05:57:55.879634 | orchestrator | Monday 29 September 2025 05:57:38 +0000 (0:00:02.532) 0:00:03.457 ****** 2025-09-29 05:57:55.879652 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:57:55.879670 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:57:55.879688 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:57:55.879706 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:57:55.879725 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:57:55.879746 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:57:55.879765 | orchestrator | 2025-09-29 05:57:55.879783 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-29 05:57:55.879796 | orchestrator | 2025-09-29 05:57:55.879809 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-29 05:57:55.879822 | orchestrator | Monday 29 September 2025 05:57:40 +0000 (0:00:01.795) 0:00:05.253 ****** 2025-09-29 05:57:55.879836 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-29 05:57:55.879851 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-29 05:57:55.879911 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-29 05:57:55.879923 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-29 05:57:55.879935 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-29 05:57:55.879948 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-29 05:57:55.879960 | orchestrator | 2025-09-29 05:57:55.879974 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-29 05:57:55.879986 | orchestrator | Monday 29 September 2025 05:57:41 +0000 (0:00:01.406) 0:00:06.659 ****** 2025-09-29 05:57:55.880000 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:57:55.880012 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:57:55.880025 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:57:55.880037 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:57:55.880050 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:57:55.880062 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:57:55.880074 | orchestrator | 2025-09-29 05:57:55.880087 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-29 05:57:55.880099 | orchestrator | Monday 29 September 2025 05:57:45 +0000 (0:00:03.631) 0:00:10.291 ****** 2025-09-29 05:57:55.880111 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:57:55.880122 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:57:55.880133 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:57:55.880143 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:57:55.880154 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:57:55.880165 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:57:55.880175 | orchestrator | 2025-09-29 05:57:55.880186 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-29 05:57:55.880210 | orchestrator | 2025-09-29 05:57:55.880221 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-29 05:57:55.880232 | orchestrator | Monday 29 September 2025 05:57:46 +0000 (0:00:00.649) 0:00:10.941 ****** 2025-09-29 05:57:55.880243 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:55.880254 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:57:55.880264 | orchestrator | changed: [testbed-node-5] 2025-09-29 
05:57:55.880275 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:57:55.880286 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:57:55.880297 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:57:55.880308 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:57:55.880318 | orchestrator | 2025-09-29 05:57:55.880329 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-29 05:57:55.880340 | orchestrator | Monday 29 September 2025 05:57:47 +0000 (0:00:01.556) 0:00:12.497 ****** 2025-09-29 05:57:55.880350 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:55.880361 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:57:55.880371 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:57:55.880382 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:57:55.880393 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:57:55.880403 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:57:55.880433 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:57:55.880445 | orchestrator | 2025-09-29 05:57:55.880456 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-29 05:57:55.880467 | orchestrator | Monday 29 September 2025 05:57:49 +0000 (0:00:01.584) 0:00:14.082 ****** 2025-09-29 05:57:55.880478 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:57:55.880488 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:57:55.880499 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:57:55.880510 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:57:55.880521 | orchestrator | ok: [testbed-manager] 2025-09-29 05:57:55.880531 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:57:55.880542 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:57:55.880553 | orchestrator | 2025-09-29 05:57:55.880564 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-29 05:57:55.880575 | orchestrator 
| Monday 29 September 2025 05:57:50 +0000 (0:00:01.476) 0:00:15.558 ****** 2025-09-29 05:57:55.880586 | orchestrator | changed: [testbed-manager] 2025-09-29 05:57:55.880597 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:57:55.880608 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:57:55.880618 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:57:55.880629 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:57:55.880640 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:57:55.880651 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:57:55.880661 | orchestrator | 2025-09-29 05:57:55.880672 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-29 05:57:55.880683 | orchestrator | Monday 29 September 2025 05:57:52 +0000 (0:00:01.716) 0:00:17.275 ****** 2025-09-29 05:57:55.880694 | orchestrator | skipping: [testbed-manager] 2025-09-29 05:57:55.880705 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:57:55.880716 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:57:55.880730 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:57:55.880748 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:57:55.880767 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:57:55.880785 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:57:55.880802 | orchestrator | 2025-09-29 05:57:55.880828 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-29 05:57:55.880850 | orchestrator | 2025-09-29 05:57:55.880901 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-29 05:57:55.880926 | orchestrator | Monday 29 September 2025 05:57:53 +0000 (0:00:00.588) 0:00:17.864 ****** 2025-09-29 05:57:55.880944 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:57:55.880962 | orchestrator | ok: [testbed-manager] 2025-09-29 05:57:55.880994 | orchestrator | ok: 
[testbed-node-5] 2025-09-29 05:57:55.881016 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:57:55.881033 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:57:55.881044 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:57:55.881055 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:57:55.881065 | orchestrator | 2025-09-29 05:57:55.881077 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:57:55.881089 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:57:55.881101 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:57:55.881112 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:57:55.881123 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:57:55.881133 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:57:55.881144 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:57:55.881155 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:57:55.881166 | orchestrator | 2025-09-29 05:57:55.881177 | orchestrator | 2025-09-29 05:57:55.881188 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 05:57:55.881199 | orchestrator | Monday 29 September 2025 05:57:55 +0000 (0:00:02.819) 0:00:20.683 ****** 2025-09-29 05:57:55.881221 | orchestrator | =============================================================================== 2025-09-29 05:57:55.881233 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.63s 2025-09-29 05:57:55.881244 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.82s 2025-09-29 05:57:55.881254 | orchestrator | Apply netplan configuration --------------------------------------------- 2.53s 2025-09-29 05:57:55.881265 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-09-29 05:57:55.881276 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s 2025-09-29 05:57:55.881286 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s 2025-09-29 05:57:55.881297 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.56s 2025-09-29 05:57:55.881308 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.48s 2025-09-29 05:57:55.881319 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.41s 2025-09-29 05:57:55.881329 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s 2025-09-29 05:57:55.881340 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s 2025-09-29 05:57:55.881361 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2025-09-29 05:57:56.512251 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-29 05:58:08.595965 | orchestrator | 2025-09-29 05:58:08 | INFO  | Task 2eb90360-6a46-4f4c-9fc8-b7f340be030a (reboot) was prepared for execution. 2025-09-29 05:58:08.596096 | orchestrator | 2025-09-29 05:58:08 | INFO  | It takes a moment until task 2eb90360-6a46-4f4c-9fc8-b7f340be030a (reboot) has been started and output is visible here. 
2025-09-29 05:58:17.905028 | orchestrator | 2025-09-29 05:58:17.905130 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-29 05:58:17.905148 | orchestrator | 2025-09-29 05:58:17.905184 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-29 05:58:17.905197 | orchestrator | Monday 29 September 2025 05:58:12 +0000 (0:00:00.187) 0:00:00.187 ****** 2025-09-29 05:58:17.905208 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:58:17.905220 | orchestrator | 2025-09-29 05:58:17.905231 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-29 05:58:17.905242 | orchestrator | Monday 29 September 2025 05:58:12 +0000 (0:00:00.099) 0:00:00.286 ****** 2025-09-29 05:58:17.905253 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:58:17.905264 | orchestrator | 2025-09-29 05:58:17.905275 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-29 05:58:17.905300 | orchestrator | Monday 29 September 2025 05:58:13 +0000 (0:00:00.829) 0:00:01.115 ****** 2025-09-29 05:58:17.905311 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:58:17.905322 | orchestrator | 2025-09-29 05:58:17.905333 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-29 05:58:17.905344 | orchestrator | 2025-09-29 05:58:17.905355 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-29 05:58:17.905366 | orchestrator | Monday 29 September 2025 05:58:13 +0000 (0:00:00.095) 0:00:01.210 ****** 2025-09-29 05:58:17.905377 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:58:17.905387 | orchestrator | 2025-09-29 05:58:17.905417 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-29 05:58:17.905428 | orchestrator | Monday 29 September 
2025 05:58:13 +0000 (0:00:00.087) 0:00:01.298 ****** 2025-09-29 05:58:17.905450 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:58:17.905461 | orchestrator | 2025-09-29 05:58:17.905472 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-29 05:58:17.905483 | orchestrator | Monday 29 September 2025 05:58:14 +0000 (0:00:00.637) 0:00:01.935 ****** 2025-09-29 05:58:17.905494 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:58:17.905504 | orchestrator | 2025-09-29 05:58:17.905515 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-29 05:58:17.905526 | orchestrator | 2025-09-29 05:58:17.905537 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-29 05:58:17.905548 | orchestrator | Monday 29 September 2025 05:58:14 +0000 (0:00:00.110) 0:00:02.046 ****** 2025-09-29 05:58:17.905559 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:58:17.905570 | orchestrator | 2025-09-29 05:58:17.905580 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-29 05:58:17.905591 | orchestrator | Monday 29 September 2025 05:58:14 +0000 (0:00:00.155) 0:00:02.201 ****** 2025-09-29 05:58:17.905602 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:58:17.905613 | orchestrator | 2025-09-29 05:58:17.905624 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-29 05:58:17.905635 | orchestrator | Monday 29 September 2025 05:58:14 +0000 (0:00:00.656) 0:00:02.858 ****** 2025-09-29 05:58:17.905646 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:58:17.905657 | orchestrator | 2025-09-29 05:58:17.905668 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-29 05:58:17.905679 | orchestrator | 2025-09-29 05:58:17.905690 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-09-29 05:58:17.905701 | orchestrator | Monday 29 September 2025 05:58:15 +0000 (0:00:00.093) 0:00:02.952 ****** 2025-09-29 05:58:17.905711 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:58:17.905722 | orchestrator | 2025-09-29 05:58:17.905733 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-29 05:58:17.905744 | orchestrator | Monday 29 September 2025 05:58:15 +0000 (0:00:00.084) 0:00:03.036 ****** 2025-09-29 05:58:17.905755 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:58:17.905766 | orchestrator | 2025-09-29 05:58:17.905777 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-29 05:58:17.905788 | orchestrator | Monday 29 September 2025 05:58:15 +0000 (0:00:00.645) 0:00:03.681 ****** 2025-09-29 05:58:17.905809 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:58:17.905820 | orchestrator | 2025-09-29 05:58:17.905831 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-29 05:58:17.905842 | orchestrator | 2025-09-29 05:58:17.905892 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-29 05:58:17.905906 | orchestrator | Monday 29 September 2025 05:58:15 +0000 (0:00:00.120) 0:00:03.802 ****** 2025-09-29 05:58:17.905917 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:58:17.905927 | orchestrator | 2025-09-29 05:58:17.905938 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-29 05:58:17.905949 | orchestrator | Monday 29 September 2025 05:58:15 +0000 (0:00:00.108) 0:00:03.911 ****** 2025-09-29 05:58:17.905960 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:58:17.905970 | orchestrator | 2025-09-29 05:58:17.905981 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-09-29 05:58:17.905992 | orchestrator | Monday 29 September 2025 05:58:16 +0000 (0:00:00.668) 0:00:04.579 ****** 2025-09-29 05:58:17.906003 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:58:17.906060 | orchestrator | 2025-09-29 05:58:17.906075 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-29 05:58:17.906086 | orchestrator | 2025-09-29 05:58:17.906097 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-29 05:58:17.906107 | orchestrator | Monday 29 September 2025 05:58:16 +0000 (0:00:00.113) 0:00:04.692 ****** 2025-09-29 05:58:17.906118 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:58:17.906129 | orchestrator | 2025-09-29 05:58:17.906140 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-29 05:58:17.906151 | orchestrator | Monday 29 September 2025 05:58:16 +0000 (0:00:00.108) 0:00:04.801 ****** 2025-09-29 05:58:17.906162 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:58:17.906173 | orchestrator | 2025-09-29 05:58:17.906183 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-29 05:58:17.906194 | orchestrator | Monday 29 September 2025 05:58:17 +0000 (0:00:00.673) 0:00:05.475 ****** 2025-09-29 05:58:17.906223 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:58:17.906235 | orchestrator | 2025-09-29 05:58:17.906246 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:58:17.906258 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:58:17.906271 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:58:17.906282 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-09-29 05:58:17.906300 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:58:17.906311 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:58:17.906321 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 05:58:17.906332 | orchestrator | 2025-09-29 05:58:17.906343 | orchestrator | 2025-09-29 05:58:17.906354 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 05:58:17.906365 | orchestrator | Monday 29 September 2025 05:58:17 +0000 (0:00:00.041) 0:00:05.516 ****** 2025-09-29 05:58:17.906376 | orchestrator | =============================================================================== 2025-09-29 05:58:17.906387 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.11s 2025-09-29 05:58:17.906397 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.64s 2025-09-29 05:58:17.906416 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2025-09-29 05:58:18.200136 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-29 05:58:30.047769 | orchestrator | 2025-09-29 05:58:30 | INFO  | Task 6b7137ca-ce20-4fcf-a053-ffbaefb5ecdf (wait-for-connection) was prepared for execution. 2025-09-29 05:58:30.047942 | orchestrator | 2025-09-29 05:58:30 | INFO  | It takes a moment until task 6b7137ca-ce20-4fcf-a053-ffbaefb5ecdf (wait-for-connection) has been started and output is visible here. 
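The `wait-for-connection` play queued above boils down to polling each rebooted node until SSH answers again. A minimal shell sketch of that pattern, for orientation only: the `wait_for_ssh` name, the 5-second interval, and the default timeout are assumptions for illustration, while the real play uses Ansible's `wait_for_connection` module rather than raw `ssh`.

```shell
#!/usr/bin/env bash
# Poll a host over SSH until it responds or a timeout elapses.
# Rough equivalent of what the wait-for-connection play does per node
# after the "do not wait for the reboot to complete" reboot above.
wait_for_ssh() {
    local host=$1 timeout=${2:-600} waited=0
    until ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            echo "$host still unreachable after ${timeout}s" >&2
            return 1
        fi
        sleep 5
    done
}
```

For example, `for n in testbed-node-{0..5}; do wait_for_ssh "$n"; done` would wait for all six nodes serially; the play above does the equivalent in parallel.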
2025-09-29 05:58:45.512020 | orchestrator | 2025-09-29 05:58:45.512142 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-29 05:58:45.512161 | orchestrator | 2025-09-29 05:58:45.512174 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-29 05:58:45.512186 | orchestrator | Monday 29 September 2025 05:58:33 +0000 (0:00:00.211) 0:00:00.211 ****** 2025-09-29 05:58:45.512198 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:58:45.512210 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:58:45.512222 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:58:45.512233 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:58:45.512244 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:58:45.512255 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:58:45.512266 | orchestrator | 2025-09-29 05:58:45.512277 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:58:45.512289 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:58:45.512302 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:58:45.512313 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:58:45.512324 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:58:45.512335 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:58:45.512347 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:58:45.512358 | orchestrator | 2025-09-29 05:58:45.512369 | orchestrator | 2025-09-29 05:58:45.512380 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-29 05:58:45.512391 | orchestrator | Monday 29 September 2025 05:58:45 +0000 (0:00:11.538) 0:00:11.750 ****** 2025-09-29 05:58:45.512403 | orchestrator | =============================================================================== 2025-09-29 05:58:45.512414 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.54s 2025-09-29 05:58:45.798845 | orchestrator | + osism apply hddtemp 2025-09-29 05:58:57.628955 | orchestrator | 2025-09-29 05:58:57 | INFO  | Task 20f5b6c8-e15f-4846-88ae-448024cecbd9 (hddtemp) was prepared for execution. 2025-09-29 05:58:57.629068 | orchestrator | 2025-09-29 05:58:57 | INFO  | It takes a moment until task 20f5b6c8-e15f-4846-88ae-448024cecbd9 (hddtemp) has been started and output is visible here. 2025-09-29 05:59:23.702771 | orchestrator | 2025-09-29 05:59:23.702931 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-29 05:59:23.702951 | orchestrator | 2025-09-29 05:59:23.702964 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-29 05:59:23.702976 | orchestrator | Monday 29 September 2025 05:59:01 +0000 (0:00:00.264) 0:00:00.264 ****** 2025-09-29 05:59:23.702988 | orchestrator | ok: [testbed-manager] 2025-09-29 05:59:23.703001 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:59:23.703012 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:59:23.703046 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:59:23.703057 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:59:23.703068 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:59:23.703078 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:59:23.703089 | orchestrator | 2025-09-29 05:59:23.703100 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-29 05:59:23.703111 | orchestrator | Monday 29 September 2025 
05:59:02 +0000 (0:00:00.646) 0:00:00.910 ****** 2025-09-29 05:59:23.703139 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 05:59:23.703152 | orchestrator | 2025-09-29 05:59:23.703164 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-29 05:59:23.703175 | orchestrator | Monday 29 September 2025 05:59:03 +0000 (0:00:01.085) 0:00:01.996 ****** 2025-09-29 05:59:23.703185 | orchestrator | ok: [testbed-manager] 2025-09-29 05:59:23.703196 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:59:23.703207 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:59:23.703218 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:59:23.703228 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:59:23.703238 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:59:23.703249 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:59:23.703259 | orchestrator | 2025-09-29 05:59:23.703270 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-29 05:59:23.703281 | orchestrator | Monday 29 September 2025 05:59:05 +0000 (0:00:01.884) 0:00:03.881 ****** 2025-09-29 05:59:23.703292 | orchestrator | changed: [testbed-manager] 2025-09-29 05:59:23.703303 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:59:23.703316 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:59:23.703329 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:59:23.703341 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:59:23.703353 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:59:23.703365 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:59:23.703377 | orchestrator | 2025-09-29 05:59:23.703390 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-29 05:59:23.703403 | orchestrator | Monday 29 September 2025 05:59:06 +0000 (0:00:00.951) 0:00:04.832 ****** 2025-09-29 05:59:23.703415 | orchestrator | ok: [testbed-node-0] 2025-09-29 05:59:23.703427 | orchestrator | ok: [testbed-node-1] 2025-09-29 05:59:23.703440 | orchestrator | ok: [testbed-node-2] 2025-09-29 05:59:23.703452 | orchestrator | ok: [testbed-node-3] 2025-09-29 05:59:23.703465 | orchestrator | ok: [testbed-node-4] 2025-09-29 05:59:23.703476 | orchestrator | ok: [testbed-manager] 2025-09-29 05:59:23.703488 | orchestrator | ok: [testbed-node-5] 2025-09-29 05:59:23.703501 | orchestrator | 2025-09-29 05:59:23.703513 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-29 05:59:23.703525 | orchestrator | Monday 29 September 2025 05:59:07 +0000 (0:00:01.003) 0:00:05.836 ****** 2025-09-29 05:59:23.703538 | orchestrator | skipping: [testbed-node-0] 2025-09-29 05:59:23.703550 | orchestrator | skipping: [testbed-node-1] 2025-09-29 05:59:23.703563 | orchestrator | skipping: [testbed-node-2] 2025-09-29 05:59:23.703576 | orchestrator | changed: [testbed-manager] 2025-09-29 05:59:23.703588 | orchestrator | skipping: [testbed-node-3] 2025-09-29 05:59:23.703599 | orchestrator | skipping: [testbed-node-4] 2025-09-29 05:59:23.703609 | orchestrator | skipping: [testbed-node-5] 2025-09-29 05:59:23.703620 | orchestrator | 2025-09-29 05:59:23.703631 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-29 05:59:23.703641 | orchestrator | Monday 29 September 2025 05:59:07 +0000 (0:00:00.660) 0:00:06.496 ****** 2025-09-29 05:59:23.703652 | orchestrator | changed: [testbed-manager] 2025-09-29 05:59:23.703663 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:59:23.703674 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:59:23.703684 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:59:23.703695 | orchestrator | changed: 
[testbed-node-3] 2025-09-29 05:59:23.703715 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:59:23.703726 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:59:23.703736 | orchestrator | 2025-09-29 05:59:23.703747 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-29 05:59:23.703758 | orchestrator | Monday 29 September 2025 05:59:20 +0000 (0:00:12.648) 0:00:19.145 ****** 2025-09-29 05:59:23.703769 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 05:59:23.703780 | orchestrator | 2025-09-29 05:59:23.703791 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-29 05:59:23.703802 | orchestrator | Monday 29 September 2025 05:59:21 +0000 (0:00:01.198) 0:00:20.344 ****** 2025-09-29 05:59:23.703813 | orchestrator | changed: [testbed-manager] 2025-09-29 05:59:23.703824 | orchestrator | changed: [testbed-node-1] 2025-09-29 05:59:23.703834 | orchestrator | changed: [testbed-node-3] 2025-09-29 05:59:23.703867 | orchestrator | changed: [testbed-node-0] 2025-09-29 05:59:23.703879 | orchestrator | changed: [testbed-node-2] 2025-09-29 05:59:23.703889 | orchestrator | changed: [testbed-node-4] 2025-09-29 05:59:23.703899 | orchestrator | changed: [testbed-node-5] 2025-09-29 05:59:23.703910 | orchestrator | 2025-09-29 05:59:23.703921 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 05:59:23.703932 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 05:59:23.703963 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:59:23.703975 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:59:23.703986 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:59:23.703997 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:59:23.704008 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:59:23.704024 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 05:59:23.704035 | orchestrator | 2025-09-29 05:59:23.704046 | orchestrator | 2025-09-29 05:59:23.704057 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 05:59:23.704068 | orchestrator | Monday 29 September 2025 05:59:23 +0000 (0:00:01.643) 0:00:21.987 ****** 2025-09-29 05:59:23.704079 | orchestrator | =============================================================================== 2025-09-29 05:59:23.704090 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.65s 2025-09-29 05:59:23.704101 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.88s 2025-09-29 05:59:23.704112 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.64s 2025-09-29 05:59:23.704122 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.20s 2025-09-29 05:59:23.704133 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.09s 2025-09-29 05:59:23.704144 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.00s 2025-09-29 05:59:23.704154 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.95s 2025-09-29 05:59:23.704165 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.66s 2025-09-29 05:59:23.704183 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.65s 2025-09-29 05:59:23.888142 | orchestrator | ++ semver latest 7.1.1 2025-09-29 05:59:23.932661 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-29 05:59:23.932747 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-29 05:59:23.932760 | orchestrator | + sudo systemctl restart manager.service 2025-09-29 05:59:37.572350 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-29 05:59:37.572462 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-29 05:59:37.572478 | orchestrator | + local max_attempts=60 2025-09-29 05:59:37.572492 | orchestrator | + local name=ceph-ansible 2025-09-29 05:59:37.572504 | orchestrator | + local attempt_num=1 2025-09-29 05:59:37.572515 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 05:59:37.607232 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-29 05:59:37.607295 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 05:59:37.607308 | orchestrator | + sleep 5 2025-09-29 05:59:42.612021 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 05:59:42.640025 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-29 05:59:42.640094 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 05:59:42.640108 | orchestrator | + sleep 5 2025-09-29 05:59:47.643203 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 05:59:47.681338 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-29 05:59:47.681443 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 05:59:47.681460 | orchestrator | + sleep 5 2025-09-29 05:59:52.684823 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 05:59:52.721210 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-29 05:59:52.721476 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 05:59:52.721508 | orchestrator | + sleep 5 2025-09-29 05:59:57.726149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 05:59:57.770455 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-29 05:59:57.770559 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 05:59:57.770575 | orchestrator | + sleep 5 2025-09-29 06:00:02.774335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:02.808650 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:02.808932 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:02.809031 | orchestrator | + sleep 5 2025-09-29 06:00:07.812863 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:07.843122 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:07.843197 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:07.843212 | orchestrator | + sleep 5 2025-09-29 06:00:12.848068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:12.900650 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:12.900719 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:12.900731 | orchestrator | + sleep 5 2025-09-29 06:00:17.903659 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:17.964623 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:17.964688 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:17.964701 | orchestrator | + sleep 5 2025-09-29 06:00:22.968990 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:23.006913 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-09-29 06:00:23.006996 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:23.007010 | orchestrator | + sleep 5 2025-09-29 06:00:28.011191 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:28.048508 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:28.048584 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:28.048599 | orchestrator | + sleep 5 2025-09-29 06:00:33.051668 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:33.091412 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:33.091492 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:33.091506 | orchestrator | + sleep 5 2025-09-29 06:00:38.096567 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:38.130342 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:38.130465 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-29 06:00:38.130479 | orchestrator | + sleep 5 2025-09-29 06:00:43.134121 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-29 06:00:43.162334 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:43.162399 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-09-29 06:00:43.162412 | orchestrator | + local max_attempts=60 2025-09-29 06:00:43.162417 | orchestrator | + local name=kolla-ansible 2025-09-29 06:00:43.162422 | orchestrator | + local attempt_num=1 2025-09-29 06:00:43.162607 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-09-29 06:00:43.191556 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:43.191649 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-09-29 06:00:43.191668 | orchestrator | + local max_attempts=60 2025-09-29 
06:00:43.191687 | orchestrator | + local name=osism-ansible 2025-09-29 06:00:43.191704 | orchestrator | + local attempt_num=1 2025-09-29 06:00:43.191721 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-09-29 06:00:43.217644 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-09-29 06:00:43.217695 | orchestrator | + [[ true == \t\r\u\e ]] 2025-09-29 06:00:43.217708 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-09-29 06:00:43.362996 | orchestrator | ARA in ceph-ansible already disabled. 2025-09-29 06:00:43.483432 | orchestrator | ARA in kolla-ansible already disabled. 2025-09-29 06:00:43.624724 | orchestrator | ARA in osism-ansible already disabled. 2025-09-29 06:00:43.755452 | orchestrator | ARA in osism-kubernetes already disabled. 2025-09-29 06:00:43.757090 | orchestrator | + osism apply gather-facts 2025-09-29 06:00:55.563885 | orchestrator | 2025-09-29 06:00:55 | INFO  | Task a81fe390-297e-4b5e-ade1-a285cf9ead9f (gather-facts) was prepared for execution. 2025-09-29 06:00:55.564003 | orchestrator | 2025-09-29 06:00:55 | INFO  | It takes a moment until task a81fe390-297e-4b5e-ade1-a285cf9ead9f (gather-facts) has been started and output is visible here. 
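The health-wait loop traced above (run in turn for ceph-ansible, kolla-ansible, and osism-ansible) can be reconstructed from the xtrace. A minimal sketch: `probe_health` is a stand-in for the real probe, `/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"`, stubbed here so the sketch runs without Docker; everything else mirrors the traced variable names and the 5-second poll interval.

```shell
# Reconstruction of wait_for_container_healthy as seen in the xtrace.
# probe_health is a hypothetical stand-in for:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
probe_health() {
  echo healthy
}

wait_for_container_healthy() {
  local max_attempts=$1
  local name=$2
  local attempt_num=1
  until [ "$(probe_health "$name")" = healthy ]; do
    # Give up once max_attempts polls (5 seconds apart) have been made.
    if (( attempt_num++ == max_attempts )); then
      echo "Container ${name} did not become healthy" >&2
      return 1
    fi
    sleep 5
  done
}

wait_for_container_healthy 60 ceph-ansible
```

In the log above, ceph-ansible reports `unhealthy`, then `starting`, then `healthy` after roughly a minute of polling, while kolla-ansible and osism-ansible are already healthy on the first probe.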
2025-09-29 06:01:07.992087 | orchestrator | 2025-09-29 06:01:07.992204 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-29 06:01:07.992221 | orchestrator | 2025-09-29 06:01:07.992241 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-29 06:01:07.992261 | orchestrator | Monday 29 September 2025 06:00:59 +0000 (0:00:00.221) 0:00:00.221 ****** 2025-09-29 06:01:07.992280 | orchestrator | ok: [testbed-manager] 2025-09-29 06:01:07.992300 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:01:07.992318 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:01:07.992335 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:01:07.992353 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:01:07.992372 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:01:07.992389 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:01:07.992407 | orchestrator | 2025-09-29 06:01:07.992425 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-29 06:01:07.992443 | orchestrator | 2025-09-29 06:01:07.992462 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-29 06:01:07.992482 | orchestrator | Monday 29 September 2025 06:01:07 +0000 (0:00:07.996) 0:00:08.217 ****** 2025-09-29 06:01:07.992501 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:01:07.992521 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:01:07.992533 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:01:07.992544 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:01:07.992555 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:01:07.992566 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:01:07.992577 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:01:07.992588 | orchestrator | 2025-09-29 06:01:07.992599 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-29 06:01:07.992610 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:01:07.992624 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:01:07.992666 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:01:07.992680 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:01:07.992694 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:01:07.992708 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:01:07.992721 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:01:07.992734 | orchestrator | 2025-09-29 06:01:07.992748 | orchestrator | 2025-09-29 06:01:07.992761 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:01:07.992774 | orchestrator | Monday 29 September 2025 06:01:07 +0000 (0:00:00.458) 0:00:08.676 ****** 2025-09-29 06:01:07.992788 | orchestrator | =============================================================================== 2025-09-29 06:01:07.992801 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.00s 2025-09-29 06:01:07.992814 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-09-29 06:01:08.285982 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-09-29 06:01:08.297007 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-09-29 06:01:08.312558 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-09-29 06:01:08.322460 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-09-29 06:01:08.332741 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-09-29 06:01:08.352287 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-09-29 06:01:08.369580 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-09-29 06:01:08.385106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-09-29 06:01:08.402898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-09-29 06:01:08.421920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-09-29 06:01:08.436935 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-09-29 06:01:08.456628 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-09-29 06:01:08.475162 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-09-29 06:01:08.494568 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-09-29 06:01:08.506867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-09-29 06:01:08.529180 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-09-29 06:01:08.545009 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-09-29 06:01:08.560391 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-09-29 06:01:08.572207 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-09-29 06:01:08.582730 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-09-29 06:01:08.594344 | orchestrator | + [[ false == \t\r\u\e ]] 2025-09-29 06:01:08.783459 | orchestrator | ok: Runtime: 0:22:20.619368 2025-09-29 06:01:08.875263 | 2025-09-29 06:01:08.875434 | TASK [Deploy services] 2025-09-29 06:01:09.406709 | orchestrator | skipping: Conditional result was False 2025-09-29 06:01:09.425774 | 2025-09-29 06:01:09.425938 | TASK [Deploy in a nutshell] 2025-09-29 06:01:10.101199 | orchestrator | + set -e 2025-09-29 06:01:10.102659 | orchestrator | 2025-09-29 06:01:10.102682 | orchestrator | # PULL IMAGES 2025-09-29 06:01:10.102688 | orchestrator | 2025-09-29 06:01:10.102696 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-29 06:01:10.102706 | orchestrator | ++ export INTERACTIVE=false 2025-09-29 06:01:10.102713 | orchestrator | ++ INTERACTIVE=false 2025-09-29 06:01:10.102733 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-29 06:01:10.102743 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-29 06:01:10.102749 | orchestrator | + source /opt/manager-vars.sh 2025-09-29 06:01:10.102753 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-29 06:01:10.102760 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-29 06:01:10.102764 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-29 06:01:10.102771 | orchestrator | ++ 
CEPH_VERSION=reef 2025-09-29 06:01:10.102775 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-29 06:01:10.102783 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-29 06:01:10.102787 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-29 06:01:10.102794 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-29 06:01:10.102798 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-29 06:01:10.102802 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-29 06:01:10.102806 | orchestrator | ++ export ARA=false 2025-09-29 06:01:10.102810 | orchestrator | ++ ARA=false 2025-09-29 06:01:10.102814 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-29 06:01:10.102817 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-29 06:01:10.102833 | orchestrator | ++ export TEMPEST=false 2025-09-29 06:01:10.102840 | orchestrator | ++ TEMPEST=false 2025-09-29 06:01:10.102846 | orchestrator | ++ export IS_ZUUL=true 2025-09-29 06:01:10.102850 | orchestrator | ++ IS_ZUUL=true 2025-09-29 06:01:10.102854 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 06:01:10.102858 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 06:01:10.102862 | orchestrator | ++ export EXTERNAL_API=false 2025-09-29 06:01:10.102865 | orchestrator | ++ EXTERNAL_API=false 2025-09-29 06:01:10.102869 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-29 06:01:10.102873 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-29 06:01:10.102877 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-29 06:01:10.102880 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-29 06:01:10.102884 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-29 06:01:10.102893 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-29 06:01:10.102896 | orchestrator | + echo 2025-09-29 06:01:10.102900 | orchestrator | + echo '# PULL IMAGES' 2025-09-29 06:01:10.102904 | orchestrator | + echo 2025-09-29 06:01:10.102936 | orchestrator | ++ semver latest 7.0.0 2025-09-29 
06:01:10.155332 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-29 06:01:10.155381 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-29 06:01:10.155388 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-09-29 06:01:12.030597 | orchestrator | 2025-09-29 06:01:12 | INFO  | Trying to run play pull-images in environment custom 2025-09-29 06:01:22.132137 | orchestrator | 2025-09-29 06:01:22 | INFO  | Task 9c8bf2e0-4f0e-4c67-8020-e4e142b89ef1 (pull-images) was prepared for execution. 2025-09-29 06:01:22.132259 | orchestrator | 2025-09-29 06:01:22 | INFO  | Task 9c8bf2e0-4f0e-4c67-8020-e4e142b89ef1 is running in background. No more output. Check ARA for logs. 2025-09-29 06:01:24.344606 | orchestrator | 2025-09-29 06:01:24 | INFO  | Trying to run play wipe-partitions in environment custom 2025-09-29 06:01:34.424034 | orchestrator | 2025-09-29 06:01:34 | INFO  | Task 99fc5715-3770-441a-847f-8160f5ae1761 (wipe-partitions) was prepared for execution. 2025-09-29 06:01:34.424152 | orchestrator | 2025-09-29 06:01:34 | INFO  | It takes a moment until task 99fc5715-3770-441a-847f-8160f5ae1761 (wipe-partitions) has been started and output is visible here. 
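The version gate traced at the start of this step can be sketched as follows. This is a sketch under the assumption (visible in the trace) that the external `semver` helper prints -1, 0, or 1; `semver_result` is a hypothetical variable standing in for its captured output. The literal version `latest` takes the new code path regardless of the comparison result.

```shell
# Version gate as traced: `semver latest 7.0.0` printed -1, so only the
# MANAGER_VERSION == latest branch selects the new-style invocation.
# semver_result stands in for: $(semver "$MANAGER_VERSION" 7.0.0)
MANAGER_VERSION=latest
semver_result=-1

if [[ $semver_result -ge 0 || $MANAGER_VERSION == latest ]]; then
  # New-style invocation: background task, retry twice, custom environment.
  echo "osism apply --no-wait -r 2 -e custom pull-images"
fi
```

Note that `pull-images` is started with `--no-wait`, so its output goes to ARA rather than this console, while the `wipe-partitions` play that follows runs in the foreground.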
2025-09-29 06:01:47.913931 | orchestrator | 2025-09-29 06:01:47.914074 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-09-29 06:01:47.914094 | orchestrator | 2025-09-29 06:01:47.914106 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-09-29 06:01:47.914125 | orchestrator | Monday 29 September 2025 06:01:39 +0000 (0:00:00.171) 0:00:00.171 ****** 2025-09-29 06:01:47.914138 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:01:47.914150 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:01:47.914161 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:01:47.914173 | orchestrator | 2025-09-29 06:01:47.914184 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-09-29 06:01:47.914218 | orchestrator | Monday 29 September 2025 06:01:40 +0000 (0:00:01.512) 0:00:01.683 ****** 2025-09-29 06:01:47.914230 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:01:47.914241 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:01:47.914256 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:01:47.914267 | orchestrator | 2025-09-29 06:01:47.914278 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-09-29 06:01:47.914288 | orchestrator | Monday 29 September 2025 06:01:41 +0000 (0:00:00.240) 0:00:01.923 ****** 2025-09-29 06:01:47.914299 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:01:47.914310 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:01:47.914321 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:01:47.914331 | orchestrator | 2025-09-29 06:01:47.914343 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-09-29 06:01:47.914353 | orchestrator | Monday 29 September 2025 06:01:41 +0000 (0:00:00.689) 0:00:02.613 ****** 2025-09-29 06:01:47.914364 | orchestrator | skipping: 
[testbed-node-3] 2025-09-29 06:01:47.914375 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:01:47.914385 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:01:47.914396 | orchestrator | 2025-09-29 06:01:47.914407 | orchestrator | TASK [Check device availability] *********************************************** 2025-09-29 06:01:47.914418 | orchestrator | Monday 29 September 2025 06:01:42 +0000 (0:00:00.254) 0:00:02.868 ****** 2025-09-29 06:01:47.914429 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-29 06:01:47.914443 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-29 06:01:47.914454 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-29 06:01:47.914468 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-29 06:01:47.914481 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-29 06:01:47.914493 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-29 06:01:47.914505 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-29 06:01:47.914517 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-29 06:01:47.914530 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-29 06:01:47.914542 | orchestrator | 2025-09-29 06:01:47.914556 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-09-29 06:01:47.914569 | orchestrator | Monday 29 September 2025 06:01:43 +0000 (0:00:01.063) 0:00:03.932 ****** 2025-09-29 06:01:47.914582 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-09-29 06:01:47.914594 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-09-29 06:01:47.914607 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-09-29 06:01:47.914620 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-09-29 06:01:47.914632 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-09-29 06:01:47.914645 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-09-29 06:01:47.914657 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-09-29 06:01:47.914670 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-09-29 06:01:47.914682 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-09-29 06:01:47.914695 | orchestrator | 2025-09-29 06:01:47.914708 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-09-29 06:01:47.914720 | orchestrator | Monday 29 September 2025 06:01:44 +0000 (0:00:01.267) 0:00:05.200 ****** 2025-09-29 06:01:47.914732 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-09-29 06:01:47.914745 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-09-29 06:01:47.914757 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-09-29 06:01:47.914770 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-09-29 06:01:47.914782 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-09-29 06:01:47.914800 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-09-29 06:01:47.914813 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-09-29 06:01:47.914861 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-09-29 06:01:47.914873 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-09-29 06:01:47.914883 | orchestrator | 2025-09-29 06:01:47.914894 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-09-29 06:01:47.914905 | orchestrator | Monday 29 September 2025 06:01:46 +0000 (0:00:02.056) 0:00:07.256 ****** 2025-09-29 06:01:47.914915 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:01:47.914926 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:01:47.914937 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:01:47.914955 | orchestrator | 2025-09-29 06:01:47.914972 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-09-29 06:01:47.914983 | orchestrator | Monday 29 September 2025 06:01:46 +0000 (0:00:00.555) 0:00:07.812 ****** 2025-09-29 06:01:47.914994 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:01:47.915005 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:01:47.915015 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:01:47.915026 | orchestrator | 2025-09-29 06:01:47.915036 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:01:47.915049 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:01:47.915061 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:01:47.915088 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:01:47.915100 | orchestrator | 2025-09-29 06:01:47.915110 | orchestrator | 2025-09-29 06:01:47.915121 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:01:47.915132 | orchestrator | Monday 29 September 2025 06:01:47 +0000 (0:00:00.580) 0:00:08.392 ****** 2025-09-29 06:01:47.915143 | orchestrator | =============================================================================== 2025-09-29 06:01:47.915153 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.06s 2025-09-29 06:01:47.915164 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 1.51s 2025-09-29 06:01:47.915174 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.27s 2025-09-29 06:01:47.915185 | orchestrator | Check device availability ----------------------------------------------- 1.06s 2025-09-29 06:01:47.915195 | orchestrator | Find all logical devices with prefix ceph 
------------------------------- 0.69s 2025-09-29 06:01:47.915206 | orchestrator | Request device events from the kernel ----------------------------------- 0.58s 2025-09-29 06:01:47.915216 | orchestrator | Reload udev rules ------------------------------------------------------- 0.56s 2025-09-29 06:01:47.915227 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-09-29 06:01:47.915238 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-09-29 06:02:00.136409 | orchestrator | 2025-09-29 06:02:00 | INFO  | Task bb35e112-dc3c-4ed9-bab5-242cda9220c1 (facts) was prepared for execution. 2025-09-29 06:02:00.136506 | orchestrator | 2025-09-29 06:02:00 | INFO  | It takes a moment until task bb35e112-dc3c-4ed9-bab5-242cda9220c1 (facts) has been started and output is visible here. 2025-09-29 06:02:11.955611 | orchestrator | 2025-09-29 06:02:11.955708 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-29 06:02:11.955719 | orchestrator | 2025-09-29 06:02:11.955727 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-29 06:02:11.955734 | orchestrator | Monday 29 September 2025 06:02:03 +0000 (0:00:00.247) 0:00:00.247 ****** 2025-09-29 06:02:11.955741 | orchestrator | ok: [testbed-manager] 2025-09-29 06:02:11.955749 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:02:11.955756 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:02:11.955785 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:02:11.955791 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:02:11.955797 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:02:11.955803 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:02:11.955809 | orchestrator | 2025-09-29 06:02:11.955875 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-29 06:02:11.955882 | 
orchestrator | Monday 29 September 2025 06:02:04 +0000 (0:00:00.893) 0:00:01.140 ****** 2025-09-29 06:02:11.955888 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:02:11.955896 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:02:11.955902 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:02:11.955909 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:02:11.955915 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:02:11.955921 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:11.955928 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:11.955934 | orchestrator | 2025-09-29 06:02:11.955940 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-29 06:02:11.955946 | orchestrator | 2025-09-29 06:02:11.955953 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-29 06:02:11.955959 | orchestrator | Monday 29 September 2025 06:02:05 +0000 (0:00:01.063) 0:00:02.204 ****** 2025-09-29 06:02:11.955965 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:02:11.955972 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:02:11.955978 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:02:11.955985 | orchestrator | ok: [testbed-manager] 2025-09-29 06:02:11.955991 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:02:11.955997 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:02:11.956003 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:02:11.956010 | orchestrator | 2025-09-29 06:02:11.956016 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-29 06:02:11.956022 | orchestrator | 2025-09-29 06:02:11.956028 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-29 06:02:11.956050 | orchestrator | Monday 29 September 2025 06:02:11 +0000 (0:00:05.347) 0:00:07.552 ****** 2025-09-29 06:02:11.956056 | orchestrator | 
skipping: [testbed-manager] 2025-09-29 06:02:11.956063 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:02:11.956069 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:02:11.956075 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:02:11.956083 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:02:11.956089 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:11.956096 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:11.956102 | orchestrator | 2025-09-29 06:02:11.956109 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:02:11.956116 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:02:11.956125 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:02:11.956132 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:02:11.956138 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:02:11.956144 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:02:11.956150 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:02:11.956156 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:02:11.956161 | orchestrator | 2025-09-29 06:02:11.956174 | orchestrator | 2025-09-29 06:02:11.956180 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:02:11.956187 | orchestrator | Monday 29 September 2025 06:02:11 +0000 (0:00:00.485) 0:00:08.037 ****** 2025-09-29 06:02:11.956193 | orchestrator | =============================================================================== 
2025-09-29 06:02:11.956199 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s
2025-09-29 06:02:11.956206 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2025-09-29 06:02:11.956212 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.89s
2025-09-29 06:02:11.956219 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s
2025-09-29 06:02:13.933986 | orchestrator | 2025-09-29 06:02:13 | INFO  | Task f6c06ccb-2322-45db-8e35-47345413fd8b (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-29 06:02:13.934116 | orchestrator | 2025-09-29 06:02:13 | INFO  | It takes a moment until task f6c06ccb-2322-45db-8e35-47345413fd8b (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-29 06:02:24.180612 | orchestrator |
2025-09-29 06:02:24.180723 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-29 06:02:24.180739 | orchestrator |
2025-09-29 06:02:24.180750 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-29 06:02:24.180767 | orchestrator | Monday 29 September 2025 06:02:17 +0000 (0:00:00.296) 0:00:00.296 ******
2025-09-29 06:02:24.180778 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-29 06:02:24.180789 | orchestrator |
2025-09-29 06:02:24.180799 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-29 06:02:24.180841 | orchestrator | Monday 29 September 2025 06:02:17 +0000 (0:00:00.218) 0:00:00.514 ******
2025-09-29 06:02:24.180853 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:02:24.180863 | orchestrator |
2025-09-29 06:02:24.180872 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.180882 | orchestrator | Monday 29 September 2025 06:02:18 +0000 (0:00:00.185) 0:00:00.699 ******
2025-09-29 06:02:24.180892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-29 06:02:24.180901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-29 06:02:24.180911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-29 06:02:24.180921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-29 06:02:24.180930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-29 06:02:24.180939 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-29 06:02:24.180949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-29 06:02:24.180958 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-29 06:02:24.180968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-29 06:02:24.180977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-29 06:02:24.180986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-29 06:02:24.181003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-29 06:02:24.181013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-29 06:02:24.181023 | orchestrator |
2025-09-29 06:02:24.181032 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181042 | orchestrator | Monday 29 September 2025 06:02:18 +0000 (0:00:00.318) 0:00:01.018 ******
2025-09-29 06:02:24.181051 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181076 | orchestrator |
2025-09-29 06:02:24.181086 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181095 | orchestrator | Monday 29 September 2025 06:02:18 +0000 (0:00:00.346) 0:00:01.364 ******
2025-09-29 06:02:24.181104 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181114 | orchestrator |
2025-09-29 06:02:24.181123 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181132 | orchestrator | Monday 29 September 2025 06:02:18 +0000 (0:00:00.182) 0:00:01.547 ******
2025-09-29 06:02:24.181142 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181151 | orchestrator |
2025-09-29 06:02:24.181161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181172 | orchestrator | Monday 29 September 2025 06:02:19 +0000 (0:00:00.168) 0:00:01.715 ******
2025-09-29 06:02:24.181184 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181198 | orchestrator |
2025-09-29 06:02:24.181210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181221 | orchestrator | Monday 29 September 2025 06:02:19 +0000 (0:00:00.171) 0:00:01.886 ******
2025-09-29 06:02:24.181232 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181244 | orchestrator |
2025-09-29 06:02:24.181255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181266 | orchestrator | Monday 29 September 2025 06:02:19 +0000 (0:00:00.171) 0:00:02.057 ******
2025-09-29 06:02:24.181277 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181289 | orchestrator |
2025-09-29 06:02:24.181300 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181311 | orchestrator | Monday 29 September 2025 06:02:19 +0000 (0:00:00.167) 0:00:02.225 ******
2025-09-29 06:02:24.181322 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181333 | orchestrator |
2025-09-29 06:02:24.181344 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181355 | orchestrator | Monday 29 September 2025 06:02:19 +0000 (0:00:00.168) 0:00:02.393 ******
2025-09-29 06:02:24.181366 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181378 | orchestrator |
2025-09-29 06:02:24.181389 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181400 | orchestrator | Monday 29 September 2025 06:02:20 +0000 (0:00:00.207) 0:00:02.600 ******
2025-09-29 06:02:24.181411 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4)
2025-09-29 06:02:24.181421 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4)
2025-09-29 06:02:24.181431 | orchestrator |
2025-09-29 06:02:24.181440 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181450 | orchestrator | Monday 29 September 2025 06:02:20 +0000 (0:00:00.372) 0:00:02.973 ******
2025-09-29 06:02:24.181474 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc)
2025-09-29 06:02:24.181484 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc)
2025-09-29 06:02:24.181494 | orchestrator |
2025-09-29 06:02:24.181503 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181513 | orchestrator | Monday 29 September 2025 06:02:20 +0000 (0:00:00.367) 0:00:03.340 ******
2025-09-29 06:02:24.181522 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f)
2025-09-29 06:02:24.181531 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f)
2025-09-29 06:02:24.181541 | orchestrator |
2025-09-29 06:02:24.181550 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181560 | orchestrator | Monday 29 September 2025 06:02:21 +0000 (0:00:00.530) 0:00:03.870 ******
2025-09-29 06:02:24.181569 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a)
2025-09-29 06:02:24.181585 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a)
2025-09-29 06:02:24.181595 | orchestrator |
2025-09-29 06:02:24.181604 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:24.181614 | orchestrator | Monday 29 September 2025 06:02:21 +0000 (0:00:00.511) 0:00:04.382 ******
2025-09-29 06:02:24.181623 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-29 06:02:24.181633 | orchestrator |
2025-09-29 06:02:24.181642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.181656 | orchestrator | Monday 29 September 2025 06:02:22 +0000 (0:00:00.571) 0:00:04.954 ******
2025-09-29 06:02:24.181666 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-29 06:02:24.181675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-29 06:02:24.181685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-29 06:02:24.181694 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-29 06:02:24.181703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-29 06:02:24.181713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-29 06:02:24.181722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-29 06:02:24.181731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-29 06:02:24.181741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-29 06:02:24.181750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-29 06:02:24.181759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-29 06:02:24.181768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-29 06:02:24.181778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-29 06:02:24.181787 | orchestrator |
2025-09-29 06:02:24.181797 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.181806 | orchestrator | Monday 29 September 2025 06:02:22 +0000 (0:00:00.328) 0:00:05.282 ******
2025-09-29 06:02:24.181828 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181838 | orchestrator |
2025-09-29 06:02:24.181848 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.181857 | orchestrator | Monday 29 September 2025 06:02:22 +0000 (0:00:00.174) 0:00:05.457 ******
2025-09-29 06:02:24.181866 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181876 | orchestrator |
2025-09-29 06:02:24.181885 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.181895 | orchestrator | Monday 29 September 2025 06:02:23 +0000 (0:00:00.184) 0:00:05.641 ******
2025-09-29 06:02:24.181904 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181913 | orchestrator |
2025-09-29 06:02:24.181923 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.181932 | orchestrator | Monday 29 September 2025 06:02:23 +0000 (0:00:00.183) 0:00:05.825 ******
2025-09-29 06:02:24.181941 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181951 | orchestrator |
2025-09-29 06:02:24.181960 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.181970 | orchestrator | Monday 29 September 2025 06:02:23 +0000 (0:00:00.203) 0:00:06.029 ******
2025-09-29 06:02:24.181979 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.181988 | orchestrator |
2025-09-29 06:02:24.182004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.182058 | orchestrator | Monday 29 September 2025 06:02:23 +0000 (0:00:00.176) 0:00:06.205 ******
2025-09-29 06:02:24.182070 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.182080 | orchestrator |
2025-09-29 06:02:24.182089 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.182099 | orchestrator | Monday 29 September 2025 06:02:23 +0000 (0:00:00.167) 0:00:06.372 ******
2025-09-29 06:02:24.182108 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:24.182117 | orchestrator |
2025-09-29 06:02:24.182127 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:24.182136 | orchestrator | Monday 29 September 2025 06:02:23 +0000 (0:00:00.173) 0:00:06.546 ******
2025-09-29 06:02:24.182153 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.376527 | orchestrator |
2025-09-29 06:02:30.376639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:30.376656 | orchestrator | Monday 29 September 2025 06:02:24 +0000 (0:00:00.185) 0:00:06.731 ******
2025-09-29 06:02:30.376668 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-29 06:02:30.376681 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-29 06:02:30.376692 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-29 06:02:30.376703 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-29 06:02:30.376714 | orchestrator |
2025-09-29 06:02:30.376725 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:30.376736 | orchestrator | Monday 29 September 2025 06:02:25 +0000 (0:00:00.834) 0:00:07.566 ******
2025-09-29 06:02:30.376746 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.376757 | orchestrator |
2025-09-29 06:02:30.376768 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:30.376778 | orchestrator | Monday 29 September 2025 06:02:25 +0000 (0:00:00.172) 0:00:07.738 ******
2025-09-29 06:02:30.376789 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.376799 | orchestrator |
2025-09-29 06:02:30.376860 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:30.376879 | orchestrator | Monday 29 September 2025 06:02:25 +0000 (0:00:00.192) 0:00:07.931 ******
2025-09-29 06:02:30.376895 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.376912 | orchestrator |
2025-09-29 06:02:30.376931 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:30.376949 | orchestrator | Monday 29 September 2025 06:02:25 +0000 (0:00:00.152) 0:00:08.084 ******
2025-09-29 06:02:30.376967 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.376986 | orchestrator |
2025-09-29 06:02:30.376998 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-09-29 06:02:30.377008 | orchestrator | Monday 29 September 2025 06:02:25 +0000 (0:00:00.180) 0:00:08.264 ******
2025-09-29 06:02:30.377019 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-09-29 06:02:30.377030 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-09-29 06:02:30.377041 | orchestrator |
2025-09-29 06:02:30.377051 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-29 06:02:30.377065 | orchestrator | Monday 29 September 2025 06:02:25 +0000 (0:00:00.149) 0:00:08.414 ******
2025-09-29 06:02:30.377096 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377110 | orchestrator |
2025-09-29 06:02:30.377123 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-29 06:02:30.377136 | orchestrator | Monday 29 September 2025 06:02:25 +0000 (0:00:00.121) 0:00:08.536 ******
2025-09-29 06:02:30.377149 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377161 | orchestrator |
2025-09-29 06:02:30.377178 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-29 06:02:30.377198 | orchestrator | Monday 29 September 2025 06:02:26 +0000 (0:00:00.119) 0:00:08.655 ******
2025-09-29 06:02:30.377216 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377260 | orchestrator |
2025-09-29 06:02:30.377278 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-29 06:02:30.377296 | orchestrator | Monday 29 September 2025 06:02:26 +0000 (0:00:00.125) 0:00:08.780 ******
2025-09-29 06:02:30.377313 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:02:30.377333 | orchestrator |
2025-09-29 06:02:30.377352 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-29 06:02:30.377371 | orchestrator | Monday 29 September 2025 06:02:26 +0000 (0:00:00.120) 0:00:08.901 ******
2025-09-29 06:02:30.377389 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'da34c784-00a3-5dad-8c50-6eedba006e78'}})
2025-09-29 06:02:30.377409 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b44ac90-f026-5081-896e-3232400f6176'}})
2025-09-29 06:02:30.377426 | orchestrator |
2025-09-29 06:02:30.377446 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-29 06:02:30.377464 | orchestrator | Monday 29 September 2025 06:02:26 +0000 (0:00:00.140) 0:00:09.042 ******
2025-09-29 06:02:30.377483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'da34c784-00a3-5dad-8c50-6eedba006e78'}})
2025-09-29 06:02:30.377504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b44ac90-f026-5081-896e-3232400f6176'}})
2025-09-29 06:02:30.377515 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377526 | orchestrator |
2025-09-29 06:02:30.377536 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-29 06:02:30.377547 | orchestrator | Monday 29 September 2025 06:02:26 +0000 (0:00:00.131) 0:00:09.173 ******
2025-09-29 06:02:30.377558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'da34c784-00a3-5dad-8c50-6eedba006e78'}})
2025-09-29 06:02:30.377569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b44ac90-f026-5081-896e-3232400f6176'}})
2025-09-29 06:02:30.377579 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377589 | orchestrator |
2025-09-29 06:02:30.377600 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-29 06:02:30.377611 | orchestrator | Monday 29 September 2025 06:02:26 +0000 (0:00:00.256) 0:00:09.430 ******
2025-09-29 06:02:30.377621 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'da34c784-00a3-5dad-8c50-6eedba006e78'}})
2025-09-29 06:02:30.377632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b44ac90-f026-5081-896e-3232400f6176'}})
2025-09-29 06:02:30.377643 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377653 | orchestrator |
2025-09-29 06:02:30.377684 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-29 06:02:30.377696 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.136) 0:00:09.566 ******
2025-09-29 06:02:30.377707 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:02:30.377718 | orchestrator |
2025-09-29 06:02:30.377734 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-29 06:02:30.377746 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.120) 0:00:09.687 ******
2025-09-29 06:02:30.377756 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:02:30.377767 | orchestrator |
2025-09-29 06:02:30.377777 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-29 06:02:30.377788 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.126) 0:00:09.813 ******
2025-09-29 06:02:30.377799 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377836 | orchestrator |
2025-09-29 06:02:30.377849 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-29 06:02:30.377860 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.110) 0:00:09.924 ******
2025-09-29 06:02:30.377871 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377881 | orchestrator |
2025-09-29 06:02:30.377903 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-29 06:02:30.377914 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.157) 0:00:10.082 ******
2025-09-29 06:02:30.377925 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.377935 | orchestrator |
2025-09-29 06:02:30.377946 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-29 06:02:30.377956 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.112) 0:00:10.194 ******
2025-09-29 06:02:30.377967 | orchestrator | ok: [testbed-node-3] => {
2025-09-29 06:02:30.377978 | orchestrator |  "ceph_osd_devices": {
2025-09-29 06:02:30.377988 | orchestrator |  "sdb": {
2025-09-29 06:02:30.377999 | orchestrator |  "osd_lvm_uuid": "da34c784-00a3-5dad-8c50-6eedba006e78"
2025-09-29 06:02:30.378009 | orchestrator |  },
2025-09-29 06:02:30.378080 | orchestrator |  "sdc": {
2025-09-29 06:02:30.378094 | orchestrator |  "osd_lvm_uuid": "5b44ac90-f026-5081-896e-3232400f6176"
2025-09-29 06:02:30.378104 | orchestrator |  }
2025-09-29 06:02:30.378115 | orchestrator |  }
2025-09-29 06:02:30.378126 | orchestrator | }
2025-09-29 06:02:30.378137 | orchestrator |
2025-09-29 06:02:30.378148 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-29 06:02:30.378158 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.133) 0:00:10.328 ******
2025-09-29 06:02:30.378169 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.378179 | orchestrator |
2025-09-29 06:02:30.378190 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-29 06:02:30.378201 | orchestrator | Monday 29 September 2025 06:02:27 +0000 (0:00:00.118) 0:00:10.446 ******
2025-09-29 06:02:30.378211 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.378222 | orchestrator |
2025-09-29 06:02:30.378232 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-29 06:02:30.378243 | orchestrator | Monday 29 September 2025 06:02:28 +0000 (0:00:00.119) 0:00:10.566 ******
2025-09-29 06:02:30.378254 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:02:30.378264 | orchestrator |
2025-09-29 06:02:30.378275 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-29 06:02:30.378285 | orchestrator | Monday 29 September 2025 06:02:28 +0000 (0:00:00.118) 0:00:10.684 ******
2025-09-29 06:02:30.378296 | orchestrator | changed: [testbed-node-3] => {
2025-09-29 06:02:30.378307 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-09-29 06:02:30.378318 | orchestrator |  "ceph_osd_devices": {
2025-09-29 06:02:30.378328 | orchestrator |  "sdb": {
2025-09-29 06:02:30.378339 | orchestrator |  "osd_lvm_uuid": "da34c784-00a3-5dad-8c50-6eedba006e78"
2025-09-29 06:02:30.378350 | orchestrator |  },
2025-09-29 06:02:30.378361 | orchestrator |  "sdc": {
2025-09-29 06:02:30.378371 | orchestrator |  "osd_lvm_uuid": "5b44ac90-f026-5081-896e-3232400f6176"
2025-09-29 06:02:30.378382 | orchestrator |  }
2025-09-29 06:02:30.378392 | orchestrator |  },
2025-09-29 06:02:30.378403 | orchestrator |  "lvm_volumes": [
2025-09-29 06:02:30.378414 | orchestrator |  {
2025-09-29 06:02:30.378424 | orchestrator |  "data": "osd-block-da34c784-00a3-5dad-8c50-6eedba006e78",
2025-09-29 06:02:30.378435 | orchestrator |  "data_vg": "ceph-da34c784-00a3-5dad-8c50-6eedba006e78"
2025-09-29 06:02:30.378446 | orchestrator |  },
2025-09-29 06:02:30.378456 | orchestrator |  {
2025-09-29 06:02:30.378467 | orchestrator |  "data": "osd-block-5b44ac90-f026-5081-896e-3232400f6176",
2025-09-29 06:02:30.378478 | orchestrator |  "data_vg": "ceph-5b44ac90-f026-5081-896e-3232400f6176"
2025-09-29 06:02:30.378488 | orchestrator |  }
2025-09-29 06:02:30.378499 | orchestrator |  ]
2025-09-29 06:02:30.378510 | orchestrator |  }
2025-09-29 06:02:30.378520 | orchestrator | }
2025-09-29 06:02:30.378531 | orchestrator |
2025-09-29 06:02:30.378548 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-29 06:02:30.378566 | orchestrator | Monday 29 September 2025 06:02:28 +0000 (0:00:00.292) 0:00:10.977 ******
2025-09-29 06:02:30.378577 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-29 06:02:30.378588 | orchestrator |
2025-09-29 06:02:30.378599 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-29 06:02:30.378609 | orchestrator |
2025-09-29 06:02:30.378620 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-29 06:02:30.378630 | orchestrator | Monday 29 September 2025 06:02:29 +0000 (0:00:01.539) 0:00:12.516 ******
2025-09-29 06:02:30.378641 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-29 06:02:30.378652 | orchestrator |
2025-09-29 06:02:30.378662 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-29 06:02:30.378673 | orchestrator | Monday 29 September 2025 06:02:30 +0000 (0:00:00.213) 0:00:12.730 ******
2025-09-29 06:02:30.378684 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:02:30.378694 | orchestrator |
2025-09-29 06:02:30.378705 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:30.378724 | orchestrator | Monday 29 September 2025 06:02:30 +0000 (0:00:00.196) 0:00:12.926 ******
2025-09-29 06:02:36.943644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-29 06:02:36.943730 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-29 06:02:36.943740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-29 06:02:36.943747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-29 06:02:36.943754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-29 06:02:36.943760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-29 06:02:36.943767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-29 06:02:36.943773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-29 06:02:36.943780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-29 06:02:36.943786 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-29 06:02:36.943793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-29 06:02:36.943799 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-29 06:02:36.943805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-29 06:02:36.943848 | orchestrator |
2025-09-29 06:02:36.943856 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.943869 | orchestrator | Monday 29 September 2025 06:02:30 +0000 (0:00:00.349) 0:00:13.276 ******
2025-09-29 06:02:36.943879 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.943892 | orchestrator |
2025-09-29 06:02:36.943903 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.943914 | orchestrator | Monday 29 September 2025 06:02:30 +0000 (0:00:00.167) 0:00:13.444 ******
2025-09-29 06:02:36.943924 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.943936 | orchestrator |
2025-09-29 06:02:36.943948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.943960 | orchestrator | Monday 29 September 2025 06:02:31 +0000 (0:00:00.158) 0:00:13.603 ******
2025-09-29 06:02:36.943971 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.943978 | orchestrator |
2025-09-29 06:02:36.943984 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.943991 | orchestrator | Monday 29 September 2025 06:02:31 +0000 (0:00:00.169) 0:00:13.772 ******
2025-09-29 06:02:36.943997 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.944020 | orchestrator |
2025-09-29 06:02:36.944027 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944033 | orchestrator | Monday 29 September 2025 06:02:31 +0000 (0:00:00.131) 0:00:13.903 ******
2025-09-29 06:02:36.944039 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.944045 | orchestrator |
2025-09-29 06:02:36.944051 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944057 | orchestrator | Monday 29 September 2025 06:02:31 +0000 (0:00:00.407) 0:00:14.311 ******
2025-09-29 06:02:36.944064 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.944070 | orchestrator |
2025-09-29 06:02:36.944076 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944082 | orchestrator | Monday 29 September 2025 06:02:31 +0000 (0:00:00.147) 0:00:14.459 ******
2025-09-29 06:02:36.944101 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.944108 | orchestrator |
2025-09-29 06:02:36.944114 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944120 | orchestrator | Monday 29 September 2025 06:02:32 +0000 (0:00:00.147) 0:00:14.606 ******
2025-09-29 06:02:36.944126 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:02:36.944133 | orchestrator |
2025-09-29 06:02:36.944139 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944145 | orchestrator | Monday 29 September 2025 06:02:32 +0000 (0:00:00.159) 0:00:14.766 ******
2025-09-29 06:02:36.944151 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254)
2025-09-29 06:02:36.944159 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254)
2025-09-29 06:02:36.944165 | orchestrator |
2025-09-29 06:02:36.944171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944177 | orchestrator | Monday 29 September 2025 06:02:32 +0000 (0:00:00.357) 0:00:15.123 ******
2025-09-29 06:02:36.944183 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495)
2025-09-29 06:02:36.944189 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495)
2025-09-29 06:02:36.944195 | orchestrator |
2025-09-29 06:02:36.944203 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944210 | orchestrator | Monday 29 September 2025 06:02:32 +0000 (0:00:00.379) 0:00:15.502 ******
2025-09-29 06:02:36.944217 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee)
2025-09-29 06:02:36.944224 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee)
2025-09-29 06:02:36.944231 | orchestrator |
2025-09-29 06:02:36.944239 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944246 | orchestrator | Monday 29 September 2025 06:02:33 +0000 (0:00:00.640) 0:00:16.143 ******
2025-09-29 06:02:36.944266 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60)
2025-09-29 06:02:36.944274 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60)
2025-09-29 06:02:36.944282 | orchestrator |
2025-09-29 06:02:36.944289 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:02:36.944296 | orchestrator | Monday 29 September 2025 06:02:33 +0000 (0:00:00.391) 0:00:16.534 ******
2025-09-29 06:02:36.944303 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-29 06:02:36.944310 | orchestrator |
2025-09-29 06:02:36.944317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:02:36.944324 | orchestrator | Monday 29 September 2025 06:02:34 +0000 (0:00:00.297) 0:00:16.831 ******
2025-09-29 06:02:36.944332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-29 06:02:36.944344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-29 06:02:36.944351 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-29 06:02:36.944358 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-29 06:02:36.944365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-29 06:02:36.944372 | orchestrator |
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-29 06:02:36.944379 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-29 06:02:36.944386 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-29 06:02:36.944393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-29 06:02:36.944401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-29 06:02:36.944409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-29 06:02:36.944417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-29 06:02:36.944426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-29 06:02:36.944434 | orchestrator | 2025-09-29 06:02:36.944443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944451 | orchestrator | Monday 29 September 2025 06:02:34 +0000 (0:00:00.303) 0:00:17.135 ****** 2025-09-29 06:02:36.944459 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944468 | orchestrator | 2025-09-29 06:02:36.944476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944485 | orchestrator | Monday 29 September 2025 06:02:34 +0000 (0:00:00.154) 0:00:17.290 ****** 2025-09-29 06:02:36.944493 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944501 | orchestrator | 2025-09-29 06:02:36.944513 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944522 | orchestrator | Monday 29 September 2025 06:02:35 +0000 (0:00:00.429) 0:00:17.719 ****** 
2025-09-29 06:02:36.944530 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944538 | orchestrator | 2025-09-29 06:02:36.944547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944556 | orchestrator | Monday 29 September 2025 06:02:35 +0000 (0:00:00.158) 0:00:17.878 ****** 2025-09-29 06:02:36.944564 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944571 | orchestrator | 2025-09-29 06:02:36.944578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944585 | orchestrator | Monday 29 September 2025 06:02:35 +0000 (0:00:00.136) 0:00:18.014 ****** 2025-09-29 06:02:36.944592 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944600 | orchestrator | 2025-09-29 06:02:36.944607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944614 | orchestrator | Monday 29 September 2025 06:02:35 +0000 (0:00:00.139) 0:00:18.153 ****** 2025-09-29 06:02:36.944621 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944628 | orchestrator | 2025-09-29 06:02:36.944635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944642 | orchestrator | Monday 29 September 2025 06:02:35 +0000 (0:00:00.136) 0:00:18.290 ****** 2025-09-29 06:02:36.944649 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944657 | orchestrator | 2025-09-29 06:02:36.944664 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944671 | orchestrator | Monday 29 September 2025 06:02:35 +0000 (0:00:00.138) 0:00:18.428 ****** 2025-09-29 06:02:36.944678 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944685 | orchestrator | 2025-09-29 06:02:36.944692 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-29 06:02:36.944704 | orchestrator | Monday 29 September 2025 06:02:36 +0000 (0:00:00.151) 0:00:18.579 ****** 2025-09-29 06:02:36.944711 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-29 06:02:36.944720 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-29 06:02:36.944727 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-29 06:02:36.944734 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-29 06:02:36.944742 | orchestrator | 2025-09-29 06:02:36.944749 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:36.944756 | orchestrator | Monday 29 September 2025 06:02:36 +0000 (0:00:00.755) 0:00:19.335 ****** 2025-09-29 06:02:36.944763 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:36.944770 | orchestrator | 2025-09-29 06:02:36.944782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:42.649994 | orchestrator | Monday 29 September 2025 06:02:36 +0000 (0:00:00.159) 0:00:19.495 ****** 2025-09-29 06:02:42.650130 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650143 | orchestrator | 2025-09-29 06:02:42.650153 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:42.650161 | orchestrator | Monday 29 September 2025 06:02:37 +0000 (0:00:00.194) 0:00:19.690 ****** 2025-09-29 06:02:42.650169 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650177 | orchestrator | 2025-09-29 06:02:42.650185 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:42.650193 | orchestrator | Monday 29 September 2025 06:02:37 +0000 (0:00:00.171) 0:00:19.861 ****** 2025-09-29 06:02:42.650201 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650209 | orchestrator | 2025-09-29 06:02:42.650217 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-29 06:02:42.650225 | orchestrator | Monday 29 September 2025 06:02:37 +0000 (0:00:00.149) 0:00:20.011 ****** 2025-09-29 06:02:42.650233 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-29 06:02:42.650241 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-29 06:02:42.650249 | orchestrator | 2025-09-29 06:02:42.650257 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-29 06:02:42.650264 | orchestrator | Monday 29 September 2025 06:02:37 +0000 (0:00:00.262) 0:00:20.274 ****** 2025-09-29 06:02:42.650272 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650280 | orchestrator | 2025-09-29 06:02:42.650288 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-29 06:02:42.650296 | orchestrator | Monday 29 September 2025 06:02:37 +0000 (0:00:00.126) 0:00:20.400 ****** 2025-09-29 06:02:42.650304 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650312 | orchestrator | 2025-09-29 06:02:42.650320 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-29 06:02:42.650328 | orchestrator | Monday 29 September 2025 06:02:37 +0000 (0:00:00.126) 0:00:20.526 ****** 2025-09-29 06:02:42.650336 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650343 | orchestrator | 2025-09-29 06:02:42.650352 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-29 06:02:42.650359 | orchestrator | Monday 29 September 2025 06:02:38 +0000 (0:00:00.124) 0:00:20.650 ****** 2025-09-29 06:02:42.650367 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:02:42.650376 | orchestrator | 2025-09-29 06:02:42.650384 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-29 
06:02:42.650391 | orchestrator | Monday 29 September 2025 06:02:38 +0000 (0:00:00.125) 0:00:20.776 ****** 2025-09-29 06:02:42.650400 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '34f4ec66-7b15-5133-bf2a-17bf3a27b54a'}}) 2025-09-29 06:02:42.650409 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46f249ea-6148-566c-bc01-762c6d5847ca'}}) 2025-09-29 06:02:42.650417 | orchestrator | 2025-09-29 06:02:42.650425 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-29 06:02:42.650452 | orchestrator | Monday 29 September 2025 06:02:38 +0000 (0:00:00.164) 0:00:20.940 ****** 2025-09-29 06:02:42.650461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '34f4ec66-7b15-5133-bf2a-17bf3a27b54a'}})  2025-09-29 06:02:42.650469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46f249ea-6148-566c-bc01-762c6d5847ca'}})  2025-09-29 06:02:42.650477 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650485 | orchestrator | 2025-09-29 06:02:42.650507 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-29 06:02:42.650516 | orchestrator | Monday 29 September 2025 06:02:38 +0000 (0:00:00.147) 0:00:21.088 ****** 2025-09-29 06:02:42.650523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '34f4ec66-7b15-5133-bf2a-17bf3a27b54a'}})  2025-09-29 06:02:42.650531 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46f249ea-6148-566c-bc01-762c6d5847ca'}})  2025-09-29 06:02:42.650541 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650550 | orchestrator | 2025-09-29 06:02:42.650560 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-29 06:02:42.650569 | 
orchestrator | Monday 29 September 2025 06:02:38 +0000 (0:00:00.149) 0:00:21.238 ****** 2025-09-29 06:02:42.650579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '34f4ec66-7b15-5133-bf2a-17bf3a27b54a'}})  2025-09-29 06:02:42.650587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46f249ea-6148-566c-bc01-762c6d5847ca'}})  2025-09-29 06:02:42.650598 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650606 | orchestrator | 2025-09-29 06:02:42.650615 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-29 06:02:42.650624 | orchestrator | Monday 29 September 2025 06:02:38 +0000 (0:00:00.137) 0:00:21.376 ****** 2025-09-29 06:02:42.650633 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:02:42.650642 | orchestrator | 2025-09-29 06:02:42.650651 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-29 06:02:42.650661 | orchestrator | Monday 29 September 2025 06:02:38 +0000 (0:00:00.109) 0:00:21.485 ****** 2025-09-29 06:02:42.650670 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:02:42.650679 | orchestrator | 2025-09-29 06:02:42.650689 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-29 06:02:42.650698 | orchestrator | Monday 29 September 2025 06:02:39 +0000 (0:00:00.131) 0:00:21.617 ****** 2025-09-29 06:02:42.650707 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650716 | orchestrator | 2025-09-29 06:02:42.650742 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-29 06:02:42.650751 | orchestrator | Monday 29 September 2025 06:02:39 +0000 (0:00:00.151) 0:00:21.768 ****** 2025-09-29 06:02:42.650760 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650769 | orchestrator | 2025-09-29 06:02:42.650778 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-29 06:02:42.650787 | orchestrator | Monday 29 September 2025 06:02:39 +0000 (0:00:00.312) 0:00:22.081 ****** 2025-09-29 06:02:42.650796 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650826 | orchestrator | 2025-09-29 06:02:42.650835 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-29 06:02:42.650845 | orchestrator | Monday 29 September 2025 06:02:39 +0000 (0:00:00.119) 0:00:22.200 ****** 2025-09-29 06:02:42.650854 | orchestrator | ok: [testbed-node-4] => { 2025-09-29 06:02:42.650864 | orchestrator |  "ceph_osd_devices": { 2025-09-29 06:02:42.650873 | orchestrator |  "sdb": { 2025-09-29 06:02:42.650883 | orchestrator |  "osd_lvm_uuid": "34f4ec66-7b15-5133-bf2a-17bf3a27b54a" 2025-09-29 06:02:42.650892 | orchestrator |  }, 2025-09-29 06:02:42.650900 | orchestrator |  "sdc": { 2025-09-29 06:02:42.650914 | orchestrator |  "osd_lvm_uuid": "46f249ea-6148-566c-bc01-762c6d5847ca" 2025-09-29 06:02:42.650922 | orchestrator |  } 2025-09-29 06:02:42.650930 | orchestrator |  } 2025-09-29 06:02:42.650938 | orchestrator | } 2025-09-29 06:02:42.650945 | orchestrator | 2025-09-29 06:02:42.650953 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-29 06:02:42.650961 | orchestrator | Monday 29 September 2025 06:02:39 +0000 (0:00:00.114) 0:00:22.314 ****** 2025-09-29 06:02:42.650969 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.650977 | orchestrator | 2025-09-29 06:02:42.650984 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-29 06:02:42.650992 | orchestrator | Monday 29 September 2025 06:02:39 +0000 (0:00:00.115) 0:00:22.429 ****** 2025-09-29 06:02:42.651000 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.651008 | orchestrator | 2025-09-29 06:02:42.651015 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-29 06:02:42.651023 | orchestrator | Monday 29 September 2025 06:02:39 +0000 (0:00:00.117) 0:00:22.547 ****** 2025-09-29 06:02:42.651031 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:02:42.651039 | orchestrator | 2025-09-29 06:02:42.651046 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-29 06:02:42.651054 | orchestrator | Monday 29 September 2025 06:02:40 +0000 (0:00:00.117) 0:00:22.665 ****** 2025-09-29 06:02:42.651062 | orchestrator | changed: [testbed-node-4] => { 2025-09-29 06:02:42.651069 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-29 06:02:42.651077 | orchestrator |  "ceph_osd_devices": { 2025-09-29 06:02:42.651085 | orchestrator |  "sdb": { 2025-09-29 06:02:42.651093 | orchestrator |  "osd_lvm_uuid": "34f4ec66-7b15-5133-bf2a-17bf3a27b54a" 2025-09-29 06:02:42.651100 | orchestrator |  }, 2025-09-29 06:02:42.651108 | orchestrator |  "sdc": { 2025-09-29 06:02:42.651116 | orchestrator |  "osd_lvm_uuid": "46f249ea-6148-566c-bc01-762c6d5847ca" 2025-09-29 06:02:42.651124 | orchestrator |  } 2025-09-29 06:02:42.651132 | orchestrator |  }, 2025-09-29 06:02:42.651139 | orchestrator |  "lvm_volumes": [ 2025-09-29 06:02:42.651147 | orchestrator |  { 2025-09-29 06:02:42.651155 | orchestrator |  "data": "osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a", 2025-09-29 06:02:42.651163 | orchestrator |  "data_vg": "ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a" 2025-09-29 06:02:42.651170 | orchestrator |  }, 2025-09-29 06:02:42.651178 | orchestrator |  { 2025-09-29 06:02:42.651186 | orchestrator |  "data": "osd-block-46f249ea-6148-566c-bc01-762c6d5847ca", 2025-09-29 06:02:42.651194 | orchestrator |  "data_vg": "ceph-46f249ea-6148-566c-bc01-762c6d5847ca" 2025-09-29 06:02:42.651201 | orchestrator |  } 2025-09-29 06:02:42.651209 | orchestrator |  ] 2025-09-29 06:02:42.651217 | orchestrator |  } 2025-09-29 06:02:42.651225 | 
orchestrator | } 2025-09-29 06:02:42.651233 | orchestrator | 2025-09-29 06:02:42.651240 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-29 06:02:42.651248 | orchestrator | Monday 29 September 2025 06:02:40 +0000 (0:00:00.195) 0:00:22.860 ****** 2025-09-29 06:02:42.651256 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-29 06:02:42.651264 | orchestrator | 2025-09-29 06:02:42.651271 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-29 06:02:42.651279 | orchestrator | 2025-09-29 06:02:42.651287 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-29 06:02:42.651295 | orchestrator | Monday 29 September 2025 06:02:41 +0000 (0:00:01.014) 0:00:23.874 ****** 2025-09-29 06:02:42.651303 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-29 06:02:42.651310 | orchestrator | 2025-09-29 06:02:42.651318 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-29 06:02:42.651326 | orchestrator | Monday 29 September 2025 06:02:41 +0000 (0:00:00.364) 0:00:24.239 ****** 2025-09-29 06:02:42.651338 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:02:42.651346 | orchestrator | 2025-09-29 06:02:42.651359 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:42.651367 | orchestrator | Monday 29 September 2025 06:02:42 +0000 (0:00:00.547) 0:00:24.786 ****** 2025-09-29 06:02:42.651375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-29 06:02:42.651382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-29 06:02:42.651390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-29 
06:02:42.651398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-29 06:02:42.651405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-29 06:02:42.651413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-29 06:02:42.651425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-29 06:02:50.108593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-29 06:02:50.108699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-29 06:02:50.108712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-29 06:02:50.108722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-29 06:02:50.108733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-29 06:02:50.108743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-29 06:02:50.108753 | orchestrator | 2025-09-29 06:02:50.108764 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.108774 | orchestrator | Monday 29 September 2025 06:02:42 +0000 (0:00:00.407) 0:00:25.194 ****** 2025-09-29 06:02:50.108784 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.108794 | orchestrator | 2025-09-29 06:02:50.108836 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.108847 | orchestrator | Monday 29 September 2025 06:02:42 +0000 (0:00:00.200) 0:00:25.394 ****** 2025-09-29 06:02:50.108856 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.108866 | orchestrator | 
2025-09-29 06:02:50.108876 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.108885 | orchestrator | Monday 29 September 2025 06:02:43 +0000 (0:00:00.209) 0:00:25.604 ****** 2025-09-29 06:02:50.108895 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.108905 | orchestrator | 2025-09-29 06:02:50.108914 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.108924 | orchestrator | Monday 29 September 2025 06:02:43 +0000 (0:00:00.221) 0:00:25.826 ****** 2025-09-29 06:02:50.108933 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.108943 | orchestrator | 2025-09-29 06:02:50.108953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.108962 | orchestrator | Monday 29 September 2025 06:02:43 +0000 (0:00:00.209) 0:00:26.035 ****** 2025-09-29 06:02:50.108972 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.108982 | orchestrator | 2025-09-29 06:02:50.108991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109001 | orchestrator | Monday 29 September 2025 06:02:43 +0000 (0:00:00.201) 0:00:26.237 ****** 2025-09-29 06:02:50.109010 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.109020 | orchestrator | 2025-09-29 06:02:50.109030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109039 | orchestrator | Monday 29 September 2025 06:02:43 +0000 (0:00:00.208) 0:00:26.445 ****** 2025-09-29 06:02:50.109049 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.109083 | orchestrator | 2025-09-29 06:02:50.109093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109103 | orchestrator | Monday 29 September 2025 06:02:44 +0000 
(0:00:00.209) 0:00:26.655 ****** 2025-09-29 06:02:50.109112 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.109122 | orchestrator | 2025-09-29 06:02:50.109131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109143 | orchestrator | Monday 29 September 2025 06:02:44 +0000 (0:00:00.210) 0:00:26.866 ****** 2025-09-29 06:02:50.109155 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493) 2025-09-29 06:02:50.109167 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493) 2025-09-29 06:02:50.109178 | orchestrator | 2025-09-29 06:02:50.109189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109201 | orchestrator | Monday 29 September 2025 06:02:44 +0000 (0:00:00.670) 0:00:27.537 ****** 2025-09-29 06:02:50.109213 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1) 2025-09-29 06:02:50.109223 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1) 2025-09-29 06:02:50.109235 | orchestrator | 2025-09-29 06:02:50.109246 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109256 | orchestrator | Monday 29 September 2025 06:02:45 +0000 (0:00:00.866) 0:00:28.404 ****** 2025-09-29 06:02:50.109267 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330) 2025-09-29 06:02:50.109279 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330) 2025-09-29 06:02:50.109290 | orchestrator | 2025-09-29 06:02:50.109301 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109311 | orchestrator | 
Monday 29 September 2025 06:02:46 +0000 (0:00:00.429) 0:00:28.834 ****** 2025-09-29 06:02:50.109322 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5) 2025-09-29 06:02:50.109334 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5) 2025-09-29 06:02:50.109345 | orchestrator | 2025-09-29 06:02:50.109356 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:02:50.109367 | orchestrator | Monday 29 September 2025 06:02:46 +0000 (0:00:00.400) 0:00:29.234 ****** 2025-09-29 06:02:50.109378 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-29 06:02:50.109388 | orchestrator | 2025-09-29 06:02:50.109399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:50.109410 | orchestrator | Monday 29 September 2025 06:02:46 +0000 (0:00:00.287) 0:00:29.522 ****** 2025-09-29 06:02:50.109437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-29 06:02:50.109449 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-29 06:02:50.109460 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-29 06:02:50.109472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-29 06:02:50.109483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-29 06:02:50.109493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-29 06:02:50.109525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-29 06:02:50.109535 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-29 06:02:50.109546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-29 06:02:50.109617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-29 06:02:50.109635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-29 06:02:50.109651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-29 06:02:50.109667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-29 06:02:50.109683 | orchestrator | 2025-09-29 06:02:50.109700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:50.109717 | orchestrator | Monday 29 September 2025 06:02:47 +0000 (0:00:00.323) 0:00:29.845 ****** 2025-09-29 06:02:50.109727 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.109736 | orchestrator | 2025-09-29 06:02:50.109746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:50.109755 | orchestrator | Monday 29 September 2025 06:02:47 +0000 (0:00:00.178) 0:00:30.024 ****** 2025-09-29 06:02:50.109765 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.109774 | orchestrator | 2025-09-29 06:02:50.109784 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:50.109793 | orchestrator | Monday 29 September 2025 06:02:47 +0000 (0:00:00.165) 0:00:30.190 ****** 2025-09-29 06:02:50.109803 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:02:50.109835 | orchestrator | 2025-09-29 06:02:50.109853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:02:50.109863 | 
Monday 29 September 2025 06:02:47 +0000 (0:00:00.172) 0:00:30.362 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:47 +0000 (0:00:00.169) 0:00:30.532 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:48 +0000 (0:00:00.167) 0:00:30.699 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:48 +0000 (0:00:00.475) 0:00:31.175 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:48 +0000 (0:00:00.164) 0:00:31.339 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:48 +0000 (0:00:00.166) 0:00:31.505 ******
ok: [testbed-node-5] => (item=sda1)
ok: [testbed-node-5] => (item=sda14)
ok: [testbed-node-5] => (item=sda15)
ok: [testbed-node-5] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:49 +0000 (0:00:00.552) 0:00:32.058 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:49 +0000 (0:00:00.150) 0:00:32.209 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:49 +0000 (0:00:00.174) 0:00:32.383 ******
skipping: [testbed-node-5]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:02:49 +0000 (0:00:00.139) 0:00:32.523 ******
skipping: [testbed-node-5]

TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.137) 0:00:32.661 ******
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})

TASK [Generate WAL VG names] ***************************************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.113) 0:00:32.775 ******
skipping: [testbed-node-5]

TASK [Generate DB VG names] ****************************************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.086) 0:00:32.862 ******
skipping: [testbed-node-5]

TASK [Generate shared DB/WAL VG names] *****************************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.087) 0:00:32.950 ******
skipping: [testbed-node-5]

TASK [Define lvm_volumes structures] *******************************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.088) 0:00:33.038 ******
ok: [testbed-node-5]

TASK [Generate lvm_volumes structure (block only)] *****************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.215) 0:00:33.253 ******
ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6be24fb8-e256-5721-a6a2-6a7f57bf9910'}})
ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed2553fc-8d98-5289-a275-720d5101f8b0'}})

TASK [Generate lvm_volumes structure (block + db)] *****************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.119) 0:00:33.373 ******
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6be24fb8-e256-5721-a6a2-6a7f57bf9910'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed2553fc-8d98-5289-a275-720d5101f8b0'}})
skipping: [testbed-node-5]

TASK [Generate lvm_volumes structure (block + wal)] ****************************
Monday 29 September 2025 06:02:50 +0000 (0:00:00.113) 0:00:33.486 ******
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6be24fb8-e256-5721-a6a2-6a7f57bf9910'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed2553fc-8d98-5289-a275-720d5101f8b0'}})
skipping: [testbed-node-5]

TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.135) 0:00:33.622 ******
skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6be24fb8-e256-5721-a6a2-6a7f57bf9910'}})
skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed2553fc-8d98-5289-a275-720d5101f8b0'}})
skipping: [testbed-node-5]

TASK [Compile lvm_volumes] *****************************************************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.118) 0:00:33.740 ******
ok: [testbed-node-5]

TASK [Set OSD devices config data] *********************************************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.121) 0:00:33.862 ******
ok: [testbed-node-5]

TASK [Set DB devices config data] **********************************************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.105) 0:00:33.967 ******
skipping: [testbed-node-5]

TASK [Set WAL devices config data] *********************************************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.111) 0:00:34.079 ******
skipping: [testbed-node-5]

TASK [Set DB+WAL devices config data] ******************************************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.134) 0:00:34.214 ******
skipping: [testbed-node-5]
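The "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" steps above pair each device's `osd_lvm_uuid` with deterministic LV/VG names. A minimal sketch of that mapping (the helper name is hypothetical; the `osd-block-<uuid>` / `ceph-<uuid>` naming is taken from the values printed later in this log):

```python
# Hypothetical helper: derive "block only" lvm_volumes entries from
# ceph_osd_devices. Each osd_lvm_uuid becomes an LV "osd-block-<uuid>"
# inside a VG "ceph-<uuid>", matching the names shown in this log.
def build_lvm_volumes(ceph_osd_devices):
    return [
        {
            "data": f"osd-block-{meta['osd_lvm_uuid']}",
            "data_vg": f"ceph-{meta['osd_lvm_uuid']}",
        }
        for device, meta in sorted(ceph_osd_devices.items())
    ]

# Devices and UUIDs as reported for testbed-node-5 above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "6be24fb8-e256-5721-a6a2-6a7f57bf9910"},
    "sdc": {"osd_lvm_uuid": "ed2553fc-8d98-5289-a275-720d5101f8b0"},
}
lvm_volumes = build_lvm_volumes(ceph_osd_devices)
```

The DB/WAL variants skipped above would add `db`/`wal` keys to each entry; only the block-only path runs in this job.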
TASK [Print ceph_osd_devices] **************************************************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.116) 0:00:34.453 ******
ok: [testbed-node-5] => {
    "ceph_osd_devices": {
        "sdb": {
            "osd_lvm_uuid": "6be24fb8-e256-5721-a6a2-6a7f57bf9910"
        },
        "sdc": {
            "osd_lvm_uuid": "ed2553fc-8d98-5289-a275-720d5101f8b0"
        }
    }
}

TASK [Print WAL devices] *******************************************************
Monday 29 September 2025 06:02:51 +0000 (0:00:00.116) 0:00:34.453 ******
skipping: [testbed-node-5]

TASK [Print DB devices] ********************************************************
Monday 29 September 2025 06:02:52 +0000 (0:00:00.109) 0:00:34.563 ******
skipping: [testbed-node-5]

TASK [Print shared DB/WAL devices] *********************************************
Monday 29 September 2025 06:02:52 +0000 (0:00:00.234) 0:00:34.797 ******
skipping: [testbed-node-5]

TASK [Print configuration data] ************************************************
Monday 29 September 2025 06:02:52 +0000 (0:00:00.119) 0:00:34.916 ******
changed: [testbed-node-5] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "6be24fb8-e256-5721-a6a2-6a7f57bf9910"
            },
            "sdc": {
                "osd_lvm_uuid": "ed2553fc-8d98-5289-a275-720d5101f8b0"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910",
                "data_vg": "ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910"
            },
            {
                "data": "osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0",
                "data_vg": "ceph-ed2553fc-8d98-5289-a275-720d5101f8b0"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Monday 29 September 2025 06:02:52 +0000 (0:00:00.207) 0:00:35.124 ******
changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]

PLAY RECAP *********************************************************************
testbed-node-3             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
testbed-node-4             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0
testbed-node-5             : ok=42   changed=2    unreachable=0    failed=0    skipped=32   rescued=0    ignored=0

TASKS RECAP ********************************************************************
Monday 29 September 2025 06:02:53 +0000 (0:00:00.861) 0:00:35.985 ******
===============================================================================
Write configuration file ------------------------------------------------ 3.41s
Add known links to the list of available block devices ------------------ 1.08s
Add known partitions to the list of available block devices ------------- 0.96s
Get initial list of available block devices ----------------------------- 0.93s
Add known links to the list of available block devices ------------------ 0.87s
Add known partitions to the list of available block devices ------------- 0.83s
Get extra vars for Ceph configuration ----------------------------------- 0.80s
Add known partitions to the list of available block devices ------------- 0.76s
Print configuration data ------------------------------------------------ 0.70s
Add known links to the list of available block devices ------------------ 0.67s
Add known links to the list of available block devices ------------------ 0.64s
Set WAL devices config data --------------------------------------------- 0.60s
Add known links to the list of available block devices ------------------ 0.57s
Add known partitions to the list of available block devices ------------- 0.55s
Generate lvm_volumes structure (block + wal) ---------------------------- 0.54s
Add known links to the list of available block devices ------------------ 0.53s
Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.53s
Add known links to the list of available block devices ------------------ 0.51s
Add known partitions to the list of available block devices ------------- 0.48s
Print DB devices -------------------------------------------------------- 0.47s

2025-09-29 06:03:16 | INFO  | Task bf7a9ff4-06d3-45f2-8f76-aa2e24487d23 (sync inventory) is running in background. Output coming soon.
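The PLAY RECAP lines above follow Ansible's fixed `host : key=value ...` shape, which makes them easy to consume programmatically. A small illustrative parser (not part of the job itself):

```python
# Illustrative only: parse an Ansible PLAY RECAP line such as
#   "testbed-node-5 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0"
# into (hostname, {counter: value}).
def parse_recap_line(line):
    host, _, counters = line.partition(":")
    stats = {
        key: int(value)
        for key, value in (field.split("=") for field in counters.split())
    }
    return host.strip(), stats

host, stats = parse_recap_line(
    "testbed-node-5 : ok=42 changed=2 unreachable=0 failed=0 skipped=32 rescued=0 ignored=0"
)
```

With `failed=0` and `unreachable=0` on all three nodes, the configuration play completed cleanly.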
2025-09-29 06:03:17 | INFO  | Starting group_vars file reorganization
2025-09-29 06:03:17 | INFO  | Moved 0 file(s) to their respective directories
2025-09-29 06:03:17 | INFO  | Group_vars file reorganization completed
2025-09-29 06:03:19 | INFO  | Starting variable preparation from inventory
2025-09-29 06:03:23 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-29 06:03:23 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-29 06:03:23 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-29 06:03:23 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-29 06:03:23 | INFO  | Variable preparation completed
2025-09-29 06:03:24 | INFO  | Starting inventory overwrite handling
2025-09-29 06:03:24 | INFO  | Handling group overwrites in 99-overwrite
2025-09-29 06:03:24 | INFO  | Removing group frr:children from 60-generic
2025-09-29 06:03:24 | INFO  | Removing group storage:children from 50-kolla
2025-09-29 06:03:24 | INFO  | Removing group netbird:children from 50-infrastructure
2025-09-29 06:03:24 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-29 06:03:24 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-29 06:03:24 | INFO  | Handling group overwrites in 20-roles
2025-09-29 06:03:24 | INFO  | Removing group k3s_node from 50-infrastructure
2025-09-29 06:03:24 | INFO  | Removed 6 group(s) in total
2025-09-29 06:03:24 | INFO  | Inventory overwrite handling completed
2025-09-29 06:03:25 | INFO  | Starting merge of inventory files
2025-09-29 06:03:25 | INFO  | Inventory files merged successfully
2025-09-29 06:03:29 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-29 06:03:38 | INFO  | Successfully wrote ClusterShell configuration
[master 7518dc8] 2025-09-29-06-03
 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-29 06:03:42 | INFO  | Task 71bf0218-ff7e-4558-b20e-5fcbe650b6db (ceph-create-lvm-devices) was prepared for execution.
2025-09-29 06:03:42 | INFO  | It takes a moment until task 71bf0218-ff7e-4558-b20e-5fcbe650b6db (ceph-create-lvm-devices) has been started and output is visible here.
PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Monday 29 September 2025 06:03:46 +0000 (0:00:00.326) 0:00:00.326 ******
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Monday 29 September 2025 06:03:46 +0000 (0:00:00.231) 0:00:00.557 ******
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:46 +0000 (0:00:00.225) 0:00:00.782 ******
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:46 +0000 (0:00:00.365) 0:00:01.148 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:47 +0000 (0:00:00.348) 0:00:01.496 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:47 +0000 (0:00:00.198) 0:00:01.694 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:47 +0000 (0:00:00.181) 0:00:01.876 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:47 +0000 (0:00:00.179) 0:00:02.055 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:48 +0000 (0:00:00.180) 0:00:02.236 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:48 +0000 (0:00:00.180) 0:00:02.416 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:48 +0000 (0:00:00.173) 0:00:02.590 ******
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:48 +0000 (0:00:00.177) 0:00:02.768 ******
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4)

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:48 +0000 (0:00:00.363) 0:00:03.131 ******
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc)

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:49 +0000 (0:00:00.413) 0:00:03.545 ******
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f)

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:49 +0000 (0:00:00.489) 0:00:04.034 ******
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a)

TASK [Add known links to the list of available block devices] ******************
Monday 29 September 2025 06:03:50 +0000 (0:00:00.642) 0:00:04.677 ******
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:50 +0000 (0:00:00.306) 0:00:04.983 ******
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:51 +0000 (0:00:00.401) 0:00:05.384 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:51 +0000 (0:00:00.189) 0:00:05.573 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:51 +0000 (0:00:00.186) 0:00:05.759 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:51 +0000 (0:00:00.174) 0:00:05.934 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:51 +0000 (0:00:00.198) 0:00:06.133 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:52 +0000 (0:00:00.179) 0:00:06.312 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:52 +0000 (0:00:00.180) 0:00:06.492 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:52 +0000 (0:00:00.175) 0:00:06.668 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:52 +0000 (0:00:00.193) 0:00:06.862 ******
ok: [testbed-node-3] => (item=sda1)
ok: [testbed-node-3] => (item=sda14)
ok: [testbed-node-3] => (item=sda15)
ok: [testbed-node-3] => (item=sda16)

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:53 +0000 (0:00:00.986) 0:00:07.849 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:53 +0000 (0:00:00.200) 0:00:08.049 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:54 +0000 (0:00:00.181) 0:00:08.231 ******
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Monday 29 September 2025 06:03:54 +0000 (0:00:00.186) 0:00:08.417 ******
skipping: [testbed-node-3]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Monday 29 September 2025 06:03:54 +0000 (0:00:00.210) 0:00:08.628 ******
skipping: [testbed-node-3]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Monday 29 September 2025 06:03:54 +0000 (0:00:00.126) 0:00:08.754 ******
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'da34c784-00a3-5dad-8c50-6eedba006e78'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5b44ac90-f026-5081-896e-3232400f6176'}})

TASK [Create block VGs] ********************************************************
Monday 29 September 2025 06:03:54 +0000 (0:00:00.164) 0:00:08.918 ******
changed: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})
changed: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})

TASK [Print 'Create block VGs'] ************************************************
Monday 29 September 2025 06:03:56 +0000 (0:00:01.980) 0:00:10.898 ******
skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})
skipping: [testbed-node-3]

TASK [Create block LVs] ********************************************************
Monday 29 September 2025 06:03:56 +0000 (0:00:00.133) 0:00:11.032 ******
changed: [testbed-node-3] => (item={'data': 
'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'}) 2025-09-29 06:04:00.576367 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'}) 2025-09-29 06:04:00.576379 | orchestrator | 2025-09-29 06:04:00.576389 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-29 06:04:00.576400 | orchestrator | Monday 29 September 2025 06:03:58 +0000 (0:00:01.376) 0:00:12.409 ****** 2025-09-29 06:04:00.576410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:00.576421 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:00.576432 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576440 | orchestrator | 2025-09-29 06:04:00.576449 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-29 06:04:00.576457 | orchestrator | Monday 29 September 2025 06:03:58 +0000 (0:00:00.169) 0:00:12.579 ****** 2025-09-29 06:04:00.576466 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576474 | orchestrator | 2025-09-29 06:04:00.576483 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-29 06:04:00.576506 | orchestrator | Monday 29 September 2025 06:03:58 +0000 (0:00:00.166) 0:00:12.745 ****** 2025-09-29 06:04:00.576515 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:00.576524 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:00.576533 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576541 | orchestrator | 2025-09-29 06:04:00.576550 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-29 06:04:00.576558 | orchestrator | Monday 29 September 2025 06:03:59 +0000 (0:00:00.447) 0:00:13.193 ****** 2025-09-29 06:04:00.576567 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576575 | orchestrator | 2025-09-29 06:04:00.576584 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-29 06:04:00.576592 | orchestrator | Monday 29 September 2025 06:03:59 +0000 (0:00:00.168) 0:00:13.361 ****** 2025-09-29 06:04:00.576601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:00.576616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:00.576625 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576634 | orchestrator | 2025-09-29 06:04:00.576642 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-29 06:04:00.576651 | orchestrator | Monday 29 September 2025 06:03:59 +0000 (0:00:00.183) 0:00:13.545 ****** 2025-09-29 06:04:00.576659 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576667 | orchestrator | 2025-09-29 06:04:00.576676 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-29 06:04:00.576684 | orchestrator | Monday 29 September 2025 06:03:59 +0000 (0:00:00.139) 0:00:13.684 ****** 2025-09-29 06:04:00.576693 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:00.576701 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:00.576710 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576719 | orchestrator | 2025-09-29 06:04:00.576727 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-29 06:04:00.576736 | orchestrator | Monday 29 September 2025 06:03:59 +0000 (0:00:00.185) 0:00:13.870 ****** 2025-09-29 06:04:00.576745 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:00.576753 | orchestrator | 2025-09-29 06:04:00.576762 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-29 06:04:00.576770 | orchestrator | Monday 29 September 2025 06:03:59 +0000 (0:00:00.136) 0:00:14.006 ****** 2025-09-29 06:04:00.576854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:00.576867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:00.576875 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576884 | orchestrator | 2025-09-29 06:04:00.576892 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-29 06:04:00.576901 | orchestrator | Monday 29 September 2025 06:03:59 +0000 (0:00:00.153) 0:00:14.160 ****** 2025-09-29 06:04:00.576909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  
2025-09-29 06:04:00.576918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:00.576926 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576935 | orchestrator | 2025-09-29 06:04:00.576943 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-29 06:04:00.576952 | orchestrator | Monday 29 September 2025 06:04:00 +0000 (0:00:00.181) 0:00:14.341 ****** 2025-09-29 06:04:00.576960 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:00.576969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:00.576977 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.576986 | orchestrator | 2025-09-29 06:04:00.576994 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-29 06:04:00.577003 | orchestrator | Monday 29 September 2025 06:04:00 +0000 (0:00:00.139) 0:00:14.481 ****** 2025-09-29 06:04:00.577011 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.577026 | orchestrator | 2025-09-29 06:04:00.577035 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-29 06:04:00.577044 | orchestrator | Monday 29 September 2025 06:04:00 +0000 (0:00:00.134) 0:00:14.615 ****** 2025-09-29 06:04:00.577052 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:00.577060 | orchestrator | 2025-09-29 06:04:00.577075 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-29 06:04:06.307607 | orchestrator | Monday 29 September 2025 06:04:00 +0000 (0:00:00.141) 
0:00:14.757 ****** 2025-09-29 06:04:06.307714 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.307730 | orchestrator | 2025-09-29 06:04:06.307743 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-29 06:04:06.307755 | orchestrator | Monday 29 September 2025 06:04:00 +0000 (0:00:00.136) 0:00:14.893 ****** 2025-09-29 06:04:06.307766 | orchestrator | ok: [testbed-node-3] => { 2025-09-29 06:04:06.307777 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-29 06:04:06.307788 | orchestrator | } 2025-09-29 06:04:06.307868 | orchestrator | 2025-09-29 06:04:06.307881 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-29 06:04:06.307892 | orchestrator | Monday 29 September 2025 06:04:01 +0000 (0:00:00.384) 0:00:15.278 ****** 2025-09-29 06:04:06.307904 | orchestrator | ok: [testbed-node-3] => { 2025-09-29 06:04:06.307915 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-29 06:04:06.307925 | orchestrator | } 2025-09-29 06:04:06.307936 | orchestrator | 2025-09-29 06:04:06.307947 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-29 06:04:06.307958 | orchestrator | Monday 29 September 2025 06:04:01 +0000 (0:00:00.142) 0:00:15.420 ****** 2025-09-29 06:04:06.307969 | orchestrator | ok: [testbed-node-3] => { 2025-09-29 06:04:06.307980 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-29 06:04:06.307991 | orchestrator | } 2025-09-29 06:04:06.308003 | orchestrator | 2025-09-29 06:04:06.308015 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-29 06:04:06.308026 | orchestrator | Monday 29 September 2025 06:04:01 +0000 (0:00:00.129) 0:00:15.550 ****** 2025-09-29 06:04:06.308037 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:06.308048 | orchestrator | 2025-09-29 06:04:06.308059 | orchestrator | TASK [Gather WAL VGs with 
total and available size in bytes] ******************* 2025-09-29 06:04:06.308070 | orchestrator | Monday 29 September 2025 06:04:02 +0000 (0:00:00.657) 0:00:16.207 ****** 2025-09-29 06:04:06.308081 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:06.308092 | orchestrator | 2025-09-29 06:04:06.308103 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-29 06:04:06.308114 | orchestrator | Monday 29 September 2025 06:04:02 +0000 (0:00:00.513) 0:00:16.721 ****** 2025-09-29 06:04:06.308125 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:06.308136 | orchestrator | 2025-09-29 06:04:06.308147 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-29 06:04:06.308158 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.485) 0:00:17.206 ****** 2025-09-29 06:04:06.308169 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:06.308179 | orchestrator | 2025-09-29 06:04:06.308191 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-29 06:04:06.308201 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.129) 0:00:17.335 ****** 2025-09-29 06:04:06.308212 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308223 | orchestrator | 2025-09-29 06:04:06.308234 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-29 06:04:06.308245 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.099) 0:00:17.435 ****** 2025-09-29 06:04:06.308256 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308267 | orchestrator | 2025-09-29 06:04:06.308278 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-29 06:04:06.308289 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.106) 0:00:17.541 ****** 2025-09-29 06:04:06.308323 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-29 06:04:06.308335 | orchestrator |  "vgs_report": { 2025-09-29 06:04:06.308359 | orchestrator |  "vg": [] 2025-09-29 06:04:06.308371 | orchestrator |  } 2025-09-29 06:04:06.308382 | orchestrator | } 2025-09-29 06:04:06.308393 | orchestrator | 2025-09-29 06:04:06.308404 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-29 06:04:06.308415 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.147) 0:00:17.688 ****** 2025-09-29 06:04:06.308425 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308436 | orchestrator | 2025-09-29 06:04:06.308447 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-29 06:04:06.308458 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.105) 0:00:17.793 ****** 2025-09-29 06:04:06.308469 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308479 | orchestrator | 2025-09-29 06:04:06.308490 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-29 06:04:06.308501 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.118) 0:00:17.912 ****** 2025-09-29 06:04:06.308512 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308522 | orchestrator | 2025-09-29 06:04:06.308533 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-29 06:04:06.308544 | orchestrator | Monday 29 September 2025 06:04:03 +0000 (0:00:00.239) 0:00:18.151 ****** 2025-09-29 06:04:06.308554 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308565 | orchestrator | 2025-09-29 06:04:06.308576 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-29 06:04:06.308587 | orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.120) 0:00:18.271 ****** 2025-09-29 06:04:06.308597 | orchestrator | skipping: 
[testbed-node-3] 2025-09-29 06:04:06.308608 | orchestrator | 2025-09-29 06:04:06.308619 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-29 06:04:06.308630 | orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.118) 0:00:18.390 ****** 2025-09-29 06:04:06.308641 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308651 | orchestrator | 2025-09-29 06:04:06.308662 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-29 06:04:06.308673 | orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.117) 0:00:18.508 ****** 2025-09-29 06:04:06.308683 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308694 | orchestrator | 2025-09-29 06:04:06.308705 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-29 06:04:06.308716 | orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.130) 0:00:18.638 ****** 2025-09-29 06:04:06.308727 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308737 | orchestrator | 2025-09-29 06:04:06.308748 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-29 06:04:06.308776 | orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.115) 0:00:18.753 ****** 2025-09-29 06:04:06.308787 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308817 | orchestrator | 2025-09-29 06:04:06.308828 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-29 06:04:06.308839 | orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.126) 0:00:18.879 ****** 2025-09-29 06:04:06.308849 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308860 | orchestrator | 2025-09-29 06:04:06.308871 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-29 06:04:06.308882 | 
orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.139) 0:00:19.019 ****** 2025-09-29 06:04:06.308892 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308903 | orchestrator | 2025-09-29 06:04:06.308914 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-29 06:04:06.308925 | orchestrator | Monday 29 September 2025 06:04:04 +0000 (0:00:00.120) 0:00:19.139 ****** 2025-09-29 06:04:06.308936 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308947 | orchestrator | 2025-09-29 06:04:06.308965 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-29 06:04:06.308976 | orchestrator | Monday 29 September 2025 06:04:05 +0000 (0:00:00.117) 0:00:19.257 ****** 2025-09-29 06:04:06.308987 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.308998 | orchestrator | 2025-09-29 06:04:06.309009 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-29 06:04:06.309020 | orchestrator | Monday 29 September 2025 06:04:05 +0000 (0:00:00.115) 0:00:19.372 ****** 2025-09-29 06:04:06.309031 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.309042 | orchestrator | 2025-09-29 06:04:06.309052 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-29 06:04:06.309063 | orchestrator | Monday 29 September 2025 06:04:05 +0000 (0:00:00.133) 0:00:19.506 ****** 2025-09-29 06:04:06.309076 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:06.309088 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:06.309099 | orchestrator | skipping: [testbed-node-3] 2025-09-29 
06:04:06.309110 | orchestrator | 2025-09-29 06:04:06.309121 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-29 06:04:06.309132 | orchestrator | Monday 29 September 2025 06:04:05 +0000 (0:00:00.286) 0:00:19.792 ****** 2025-09-29 06:04:06.309143 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:06.309154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:06.309164 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.309175 | orchestrator | 2025-09-29 06:04:06.309186 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-29 06:04:06.309197 | orchestrator | Monday 29 September 2025 06:04:05 +0000 (0:00:00.137) 0:00:19.930 ****** 2025-09-29 06:04:06.309208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:06.309219 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:06.309230 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.309241 | orchestrator | 2025-09-29 06:04:06.309251 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-29 06:04:06.309262 | orchestrator | Monday 29 September 2025 06:04:05 +0000 (0:00:00.135) 0:00:20.065 ****** 2025-09-29 06:04:06.309273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 
06:04:06.309284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:06.309295 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.309306 | orchestrator | 2025-09-29 06:04:06.309317 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-29 06:04:06.309327 | orchestrator | Monday 29 September 2025 06:04:06 +0000 (0:00:00.139) 0:00:20.205 ****** 2025-09-29 06:04:06.309338 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:06.309349 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:06.309360 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:06.309377 | orchestrator | 2025-09-29 06:04:06.309388 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-29 06:04:06.309399 | orchestrator | Monday 29 September 2025 06:04:06 +0000 (0:00:00.146) 0:00:20.351 ****** 2025-09-29 06:04:06.309417 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:06.309434 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:11.152857 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:11.152984 | orchestrator | 2025-09-29 06:04:11.153001 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-29 06:04:11.153015 | orchestrator | Monday 29 September 2025 
06:04:06 +0000 (0:00:00.139) 0:00:20.491 ****** 2025-09-29 06:04:11.153027 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:11.153039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:11.153064 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:11.154226 | orchestrator | 2025-09-29 06:04:11.154312 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-29 06:04:11.154328 | orchestrator | Monday 29 September 2025 06:04:06 +0000 (0:00:00.141) 0:00:20.632 ****** 2025-09-29 06:04:11.154341 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:11.154354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:11.154365 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:11.154377 | orchestrator | 2025-09-29 06:04:11.154388 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-29 06:04:11.154399 | orchestrator | Monday 29 September 2025 06:04:06 +0000 (0:00:00.144) 0:00:20.777 ****** 2025-09-29 06:04:11.154410 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:11.154422 | orchestrator | 2025-09-29 06:04:11.154432 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-29 06:04:11.154443 | orchestrator | Monday 29 September 2025 06:04:07 +0000 (0:00:00.477) 0:00:21.255 ****** 2025-09-29 06:04:11.154454 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:11.154464 | 
orchestrator | 2025-09-29 06:04:11.154475 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-29 06:04:11.154486 | orchestrator | Monday 29 September 2025 06:04:07 +0000 (0:00:00.489) 0:00:21.744 ****** 2025-09-29 06:04:11.154496 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:04:11.154507 | orchestrator | 2025-09-29 06:04:11.154518 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-29 06:04:11.154528 | orchestrator | Monday 29 September 2025 06:04:07 +0000 (0:00:00.130) 0:00:21.875 ****** 2025-09-29 06:04:11.154539 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'vg_name': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'}) 2025-09-29 06:04:11.154552 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'vg_name': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'}) 2025-09-29 06:04:11.154563 | orchestrator | 2025-09-29 06:04:11.154593 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-29 06:04:11.154605 | orchestrator | Monday 29 September 2025 06:04:07 +0000 (0:00:00.143) 0:00:22.018 ****** 2025-09-29 06:04:11.154616 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:11.154652 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:11.154663 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:11.154674 | orchestrator | 2025-09-29 06:04:11.154685 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-29 06:04:11.154696 | orchestrator | Monday 29 September 2025 06:04:08 +0000 
(0:00:00.278) 0:00:22.296 ****** 2025-09-29 06:04:11.154706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:11.154717 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:11.154728 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:11.154739 | orchestrator | 2025-09-29 06:04:11.154750 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-29 06:04:11.154760 | orchestrator | Monday 29 September 2025 06:04:08 +0000 (0:00:00.163) 0:00:22.460 ****** 2025-09-29 06:04:11.154771 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})  2025-09-29 06:04:11.154783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})  2025-09-29 06:04:11.154816 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:04:11.154828 | orchestrator | 2025-09-29 06:04:11.154839 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-29 06:04:11.154925 | orchestrator | Monday 29 September 2025 06:04:08 +0000 (0:00:00.136) 0:00:22.597 ****** 2025-09-29 06:04:11.154938 | orchestrator | ok: [testbed-node-3] => { 2025-09-29 06:04:11.154949 | orchestrator |  "lvm_report": { 2025-09-29 06:04:11.154960 | orchestrator |  "lv": [ 2025-09-29 06:04:11.154971 | orchestrator |  { 2025-09-29 06:04:11.155009 | orchestrator |  "lv_name": "osd-block-5b44ac90-f026-5081-896e-3232400f6176", 2025-09-29 06:04:11.155022 | orchestrator |  "vg_name": "ceph-5b44ac90-f026-5081-896e-3232400f6176" 2025-09-29 06:04:11.155032 
| orchestrator |  }, 2025-09-29 06:04:11.155043 | orchestrator |  { 2025-09-29 06:04:11.155054 | orchestrator |  "lv_name": "osd-block-da34c784-00a3-5dad-8c50-6eedba006e78", 2025-09-29 06:04:11.155065 | orchestrator |  "vg_name": "ceph-da34c784-00a3-5dad-8c50-6eedba006e78" 2025-09-29 06:04:11.155075 | orchestrator |  } 2025-09-29 06:04:11.155086 | orchestrator |  ], 2025-09-29 06:04:11.155096 | orchestrator |  "pv": [ 2025-09-29 06:04:11.155107 | orchestrator |  { 2025-09-29 06:04:11.155118 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-29 06:04:11.155129 | orchestrator |  "vg_name": "ceph-da34c784-00a3-5dad-8c50-6eedba006e78" 2025-09-29 06:04:11.155139 | orchestrator |  }, 2025-09-29 06:04:11.155150 | orchestrator |  { 2025-09-29 06:04:11.155160 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-29 06:04:11.155171 | orchestrator |  "vg_name": "ceph-5b44ac90-f026-5081-896e-3232400f6176" 2025-09-29 06:04:11.155182 | orchestrator |  } 2025-09-29 06:04:11.155192 | orchestrator |  ] 2025-09-29 06:04:11.155203 | orchestrator |  } 2025-09-29 06:04:11.155214 | orchestrator | } 2025-09-29 06:04:11.155225 | orchestrator | 2025-09-29 06:04:11.155236 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-29 06:04:11.155247 | orchestrator | 2025-09-29 06:04:11.155257 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-29 06:04:11.155268 | orchestrator | Monday 29 September 2025 06:04:08 +0000 (0:00:00.258) 0:00:22.855 ****** 2025-09-29 06:04:11.155279 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-29 06:04:11.155300 | orchestrator | 2025-09-29 06:04:11.155311 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-29 06:04:11.155322 | orchestrator | Monday 29 September 2025 06:04:08 +0000 (0:00:00.259) 0:00:23.115 ****** 2025-09-29 06:04:11.155332 | orchestrator | ok: [testbed-node-4] 
2025-09-29 06:04:11.155343 | orchestrator |
2025-09-29 06:04:11.155354 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155365 | orchestrator | Monday 29 September 2025  06:04:09 +0000 (0:00:00.208) 0:00:23.324 ******
2025-09-29 06:04:11.155375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-29 06:04:11.155386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-29 06:04:11.155397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-29 06:04:11.155407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-29 06:04:11.155418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-29 06:04:11.155429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-29 06:04:11.155439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-29 06:04:11.155457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-29 06:04:11.155468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-29 06:04:11.155479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-29 06:04:11.155490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-29 06:04:11.155500 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-29 06:04:11.155511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-29 06:04:11.155522 | orchestrator |
2025-09-29 06:04:11.155532 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155543 | orchestrator | Monday 29 September 2025  06:04:09 +0000 (0:00:00.425) 0:00:23.749 ******
2025-09-29 06:04:11.155553 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:11.155564 | orchestrator |
2025-09-29 06:04:11.155575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155586 | orchestrator | Monday 29 September 2025  06:04:09 +0000 (0:00:00.191) 0:00:23.940 ******
2025-09-29 06:04:11.155596 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:11.155607 | orchestrator |
2025-09-29 06:04:11.155618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155629 | orchestrator | Monday 29 September 2025  06:04:09 +0000 (0:00:00.215) 0:00:24.156 ******
2025-09-29 06:04:11.155640 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:11.155650 | orchestrator |
2025-09-29 06:04:11.155661 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155672 | orchestrator | Monday 29 September 2025  06:04:10 +0000 (0:00:00.504) 0:00:24.661 ******
2025-09-29 06:04:11.155682 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:11.155693 | orchestrator |
2025-09-29 06:04:11.155704 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155715 | orchestrator | Monday 29 September 2025  06:04:10 +0000 (0:00:00.194) 0:00:24.855 ******
2025-09-29 06:04:11.155725 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:11.155736 | orchestrator |
2025-09-29 06:04:11.155747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155758 | orchestrator | Monday 29 September 2025  06:04:10 +0000 (0:00:00.157) 0:00:25.013 ******
2025-09-29 06:04:11.155768 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:11.155779 | orchestrator |
2025-09-29 06:04:11.155813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:11.155825 | orchestrator | Monday 29 September 2025  06:04:10 +0000 (0:00:00.163) 0:00:25.176 ******
2025-09-29 06:04:11.155836 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:11.155847 | orchestrator |
2025-09-29 06:04:11.155866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:20.085884 | orchestrator | Monday 29 September 2025  06:04:11 +0000 (0:00:00.159) 0:00:25.336 ******
2025-09-29 06:04:20.085992 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086007 | orchestrator |
2025-09-29 06:04:20.086069 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:20.086082 | orchestrator | Monday 29 September 2025  06:04:11 +0000 (0:00:00.155) 0:00:25.491 ******
2025-09-29 06:04:20.086093 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254)
2025-09-29 06:04:20.086106 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254)
2025-09-29 06:04:20.086117 | orchestrator |
2025-09-29 06:04:20.086128 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:20.086139 | orchestrator | Monday 29 September 2025  06:04:11 +0000 (0:00:00.372) 0:00:25.864 ******
2025-09-29 06:04:20.086149 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495)
2025-09-29 06:04:20.086160 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495)
2025-09-29 06:04:20.086184 | orchestrator |
2025-09-29 06:04:20.086206 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:20.086218 | orchestrator | Monday 29 September 2025  06:04:12 +0000 (0:00:00.356) 0:00:26.221 ******
2025-09-29 06:04:20.086229 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee)
2025-09-29 06:04:20.086240 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee)
2025-09-29 06:04:20.086250 | orchestrator |
2025-09-29 06:04:20.086261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:20.086272 | orchestrator | Monday 29 September 2025  06:04:12 +0000 (0:00:00.363) 0:00:26.585 ******
2025-09-29 06:04:20.086283 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60)
2025-09-29 06:04:20.086294 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60)
2025-09-29 06:04:20.086304 | orchestrator |
2025-09-29 06:04:20.086315 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-29 06:04:20.086326 | orchestrator | Monday 29 September 2025  06:04:12 +0000 (0:00:00.380) 0:00:26.965 ******
2025-09-29 06:04:20.086337 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-29 06:04:20.086348 | orchestrator |
2025-09-29 06:04:20.086359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086395 | orchestrator | Monday 29 September 2025  06:04:13 +0000 (0:00:00.277) 0:00:27.243 ******
2025-09-29 06:04:20.086410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-29 06:04:20.086423 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-29 06:04:20.086436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-29 06:04:20.086448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-29 06:04:20.086461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-29 06:04:20.086473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-29 06:04:20.086502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-29 06:04:20.086536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-29 06:04:20.086550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-29 06:04:20.086563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-29 06:04:20.086576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-29 06:04:20.086588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-29 06:04:20.086600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-29 06:04:20.086613 | orchestrator |
2025-09-29 06:04:20.086626 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086638 | orchestrator | Monday 29 September 2025  06:04:13 +0000 (0:00:00.482) 0:00:27.726 ******
2025-09-29 06:04:20.086651 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086663 | orchestrator |
2025-09-29 06:04:20.086676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086688 | orchestrator | Monday 29 September 2025  06:04:13 +0000 (0:00:00.166) 0:00:27.892 ******
2025-09-29 06:04:20.086701 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086715 | orchestrator |
2025-09-29 06:04:20.086728 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086740 | orchestrator | Monday 29 September 2025  06:04:13 +0000 (0:00:00.158) 0:00:28.050 ******
2025-09-29 06:04:20.086750 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086761 | orchestrator |
2025-09-29 06:04:20.086772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086782 | orchestrator | Monday 29 September 2025  06:04:14 +0000 (0:00:00.160) 0:00:28.211 ******
2025-09-29 06:04:20.086814 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086825 | orchestrator |
2025-09-29 06:04:20.086854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086866 | orchestrator | Monday 29 September 2025  06:04:14 +0000 (0:00:00.156) 0:00:28.368 ******
2025-09-29 06:04:20.086877 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086887 | orchestrator |
2025-09-29 06:04:20.086898 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086909 | orchestrator | Monday 29 September 2025  06:04:14 +0000 (0:00:00.160) 0:00:28.529 ******
2025-09-29 06:04:20.086919 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086930 | orchestrator |
2025-09-29 06:04:20.086941 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086952 | orchestrator | Monday 29 September 2025  06:04:14 +0000 (0:00:00.165) 0:00:28.694 ******
2025-09-29 06:04:20.086962 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.086973 | orchestrator |
2025-09-29 06:04:20.086984 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.086995 | orchestrator | Monday 29 September 2025  06:04:14 +0000 (0:00:00.165) 0:00:28.860 ******
2025-09-29 06:04:20.087005 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.087016 | orchestrator |
2025-09-29 06:04:20.087027 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.087038 | orchestrator | Monday 29 September 2025  06:04:14 +0000 (0:00:00.183) 0:00:29.043 ******
2025-09-29 06:04:20.087048 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-29 06:04:20.087059 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-29 06:04:20.087070 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-29 06:04:20.087080 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-29 06:04:20.087091 | orchestrator |
2025-09-29 06:04:20.087103 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.087113 | orchestrator | Monday 29 September 2025  06:04:15 +0000 (0:00:00.749) 0:00:29.792 ******
2025-09-29 06:04:20.087132 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.087143 | orchestrator |
2025-09-29 06:04:20.087154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.087165 | orchestrator | Monday 29 September 2025  06:04:15 +0000 (0:00:00.187) 0:00:29.980 ******
2025-09-29 06:04:20.087175 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.087186 | orchestrator |
2025-09-29 06:04:20.087197 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.087207 | orchestrator | Monday 29 September 2025  06:04:15 +0000 (0:00:00.175) 0:00:30.155 ******
2025-09-29 06:04:20.087218 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.087228 | orchestrator |
2025-09-29 06:04:20.087239 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-29 06:04:20.087250 | orchestrator | Monday 29 September 2025  06:04:16 +0000 (0:00:00.437) 0:00:30.593 ******
2025-09-29 06:04:20.087261 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.087271 | orchestrator |
2025-09-29 06:04:20.087282 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-29 06:04:20.087293 | orchestrator | Monday 29 September 2025  06:04:16 +0000 (0:00:00.204) 0:00:30.797 ******
2025-09-29 06:04:20.087309 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.087320 | orchestrator |
2025-09-29 06:04:20.087331 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-29 06:04:20.087342 | orchestrator | Monday 29 September 2025  06:04:16 +0000 (0:00:00.142) 0:00:30.940 ******
2025-09-29 06:04:20.087352 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '34f4ec66-7b15-5133-bf2a-17bf3a27b54a'}})
2025-09-29 06:04:20.087364 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46f249ea-6148-566c-bc01-762c6d5847ca'}})
2025-09-29 06:04:20.087374 | orchestrator |
2025-09-29 06:04:20.087385 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-29 06:04:20.087396 | orchestrator | Monday 29 September 2025  06:04:16 +0000 (0:00:00.192) 0:00:31.132 ******
2025-09-29 06:04:20.087407 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:20.087419 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:20.087430 | orchestrator |
2025-09-29 06:04:20.087441 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-29 06:04:20.087452 | orchestrator | Monday 29 September 2025  06:04:18 +0000 (0:00:01.766) 0:00:32.898 ******
2025-09-29 06:04:20.087462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:20.087475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:20.087486 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:20.087496 | orchestrator |
2025-09-29 06:04:20.087507 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-29 06:04:20.087518 | orchestrator | Monday 29 September 2025  06:04:18 +0000 (0:00:00.148) 0:00:33.046 ******
2025-09-29 06:04:20.087528 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:20.087539 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:20.087550 | orchestrator |
2025-09-29 06:04:20.087567 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-29 06:04:24.866112 | orchestrator | Monday 29 September 2025  06:04:20 +0000 (0:00:01.220) 0:00:34.267 ******
2025-09-29 06:04:24.866203 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:24.866211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:24.866215 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866220 | orchestrator |
2025-09-29 06:04:24.866225 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-29 06:04:24.866229 | orchestrator | Monday 29 September 2025  06:04:20 +0000 (0:00:00.117) 0:00:34.385 ******
2025-09-29 06:04:24.866233 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866237 | orchestrator |
2025-09-29 06:04:24.866241 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-29 06:04:24.866245 | orchestrator | Monday 29 September 2025  06:04:20 +0000 (0:00:00.130) 0:00:34.516 ******
2025-09-29 06:04:24.866249 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:24.866252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:24.866256 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866260 | orchestrator |
2025-09-29 06:04:24.866263 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-29 06:04:24.866267 | orchestrator | Monday 29 September 2025  06:04:20 +0000 (0:00:00.141) 0:00:34.657 ******
2025-09-29 06:04:24.866271 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866274 | orchestrator |
2025-09-29 06:04:24.866278 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-29 06:04:24.866282 | orchestrator | Monday 29 September 2025  06:04:20 +0000 (0:00:00.116) 0:00:34.773 ******
2025-09-29 06:04:24.866285 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:24.866289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:24.866293 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866297 | orchestrator |
2025-09-29 06:04:24.866300 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-29 06:04:24.866304 | orchestrator | Monday 29 September 2025  06:04:20 +0000 (0:00:00.145) 0:00:34.919 ******
2025-09-29 06:04:24.866317 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866321 | orchestrator |
2025-09-29 06:04:24.866325 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-29 06:04:24.866328 | orchestrator | Monday 29 September 2025  06:04:20 +0000 (0:00:00.241) 0:00:35.160 ******
2025-09-29 06:04:24.866332 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:24.866336 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:24.866339 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866343 | orchestrator |
2025-09-29 06:04:24.866347 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-29 06:04:24.866350 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.146) 0:00:35.307 ******
2025-09-29 06:04:24.866354 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:04:24.866358 | orchestrator |
2025-09-29 06:04:24.866362 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-29 06:04:24.866365 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.119) 0:00:35.426 ******
2025-09-29 06:04:24.866372 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:24.866376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:24.866380 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866384 | orchestrator |
2025-09-29 06:04:24.866388 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-29 06:04:24.866391 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.146) 0:00:35.573 ******
2025-09-29 06:04:24.866395 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:24.866399 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:24.866402 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866406 | orchestrator |
2025-09-29 06:04:24.866410 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-29 06:04:24.866414 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.133) 0:00:35.706 ******
2025-09-29 06:04:24.866426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:24.866430 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:24.866434 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866438 | orchestrator |
2025-09-29 06:04:24.866441 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-29 06:04:24.866445 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.122) 0:00:35.828 ******
2025-09-29 06:04:24.866449 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866453 | orchestrator |
2025-09-29 06:04:24.866456 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-29 06:04:24.866460 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.104) 0:00:35.933 ******
2025-09-29 06:04:24.866464 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866468 | orchestrator |
2025-09-29 06:04:24.866471 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-29 06:04:24.866475 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.117) 0:00:36.051 ******
2025-09-29 06:04:24.866479 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866482 | orchestrator |
2025-09-29 06:04:24.866486 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-29 06:04:24.866490 | orchestrator | Monday 29 September 2025  06:04:21 +0000 (0:00:00.120) 0:00:36.171 ******
2025-09-29 06:04:24.866493 | orchestrator | ok: [testbed-node-4] => {
2025-09-29 06:04:24.866497 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-29 06:04:24.866501 | orchestrator | }
2025-09-29 06:04:24.866505 | orchestrator |
2025-09-29 06:04:24.866509 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-29 06:04:24.866512 | orchestrator | Monday 29 September 2025  06:04:22 +0000 (0:00:00.119) 0:00:36.291 ******
2025-09-29 06:04:24.866516 | orchestrator | ok: [testbed-node-4] => {
2025-09-29 06:04:24.866520 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-29 06:04:24.866523 | orchestrator | }
2025-09-29 06:04:24.866527 | orchestrator |
2025-09-29 06:04:24.866531 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-29 06:04:24.866534 | orchestrator | Monday 29 September 2025  06:04:22 +0000 (0:00:00.123) 0:00:36.414 ******
2025-09-29 06:04:24.866538 | orchestrator | ok: [testbed-node-4] => {
2025-09-29 06:04:24.866542 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-29 06:04:24.866549 | orchestrator | }
2025-09-29 06:04:24.866553 | orchestrator |
2025-09-29 06:04:24.866556 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-29 06:04:24.866560 | orchestrator | Monday 29 September 2025  06:04:22 +0000 (0:00:00.129) 0:00:36.544 ******
2025-09-29 06:04:24.866564 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:04:24.866567 | orchestrator |
2025-09-29 06:04:24.866571 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-29 06:04:24.866575 | orchestrator | Monday 29 September 2025  06:04:22 +0000 (0:00:00.592) 0:00:37.137 ******
2025-09-29 06:04:24.866579 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:04:24.866582 | orchestrator |
2025-09-29 06:04:24.866586 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-29 06:04:24.866590 | orchestrator | Monday 29 September 2025  06:04:23 +0000 (0:00:00.486) 0:00:37.623 ******
2025-09-29 06:04:24.866593 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:04:24.866597 | orchestrator |
2025-09-29 06:04:24.866601 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-29 06:04:24.866604 | orchestrator | Monday 29 September 2025  06:04:23 +0000 (0:00:00.481) 0:00:38.105 ******
2025-09-29 06:04:24.866608 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:04:24.866612 | orchestrator |
2025-09-29 06:04:24.866615 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-29 06:04:24.866619 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.128) 0:00:38.233 ******
2025-09-29 06:04:24.866623 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866626 | orchestrator |
2025-09-29 06:04:24.866630 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-29 06:04:24.866634 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.094) 0:00:38.328 ******
2025-09-29 06:04:24.866641 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866645 | orchestrator |
2025-09-29 06:04:24.866649 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-29 06:04:24.866652 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.100) 0:00:38.429 ******
2025-09-29 06:04:24.866657 | orchestrator | ok: [testbed-node-4] => {
2025-09-29 06:04:24.866662 | orchestrator |  "vgs_report": {
2025-09-29 06:04:24.866666 | orchestrator |  "vg": []
2025-09-29 06:04:24.866671 | orchestrator |  }
2025-09-29 06:04:24.866675 | orchestrator | }
2025-09-29 06:04:24.866679 | orchestrator |
2025-09-29 06:04:24.866683 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-29 06:04:24.866688 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.125) 0:00:38.554 ******
2025-09-29 06:04:24.866692 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866696 | orchestrator |
2025-09-29 06:04:24.866701 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-29 06:04:24.866705 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.123) 0:00:38.678 ******
2025-09-29 06:04:24.866709 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866714 | orchestrator |
2025-09-29 06:04:24.866718 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-29 06:04:24.866722 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.110) 0:00:38.789 ******
2025-09-29 06:04:24.866726 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866731 | orchestrator |
2025-09-29 06:04:24.866735 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-29 06:04:24.866739 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.120) 0:00:38.909 ******
2025-09-29 06:04:24.866743 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:24.866748 | orchestrator |
2025-09-29 06:04:24.866752 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-29 06:04:24.866759 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.141) 0:00:39.051 ******
2025-09-29 06:04:28.866830 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.866934 | orchestrator |
2025-09-29 06:04:28.866975 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-29 06:04:28.866988 | orchestrator | Monday 29 September 2025  06:04:24 +0000 (0:00:00.117) 0:00:39.168 ******
2025-09-29 06:04:28.866999 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867009 | orchestrator |
2025-09-29 06:04:28.867020 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-29 06:04:28.867031 | orchestrator | Monday 29 September 2025  06:04:25 +0000 (0:00:00.237) 0:00:39.405 ******
2025-09-29 06:04:28.867042 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867053 | orchestrator |
2025-09-29 06:04:28.867063 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-29 06:04:28.867074 | orchestrator | Monday 29 September 2025  06:04:25 +0000 (0:00:00.121) 0:00:39.527 ******
2025-09-29 06:04:28.867085 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867096 | orchestrator |
2025-09-29 06:04:28.867107 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-29 06:04:28.867117 | orchestrator | Monday 29 September 2025  06:04:25 +0000 (0:00:00.116) 0:00:39.643 ******
2025-09-29 06:04:28.867128 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867138 | orchestrator |
2025-09-29 06:04:28.867149 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-29 06:04:28.867160 | orchestrator | Monday 29 September 2025  06:04:25 +0000 (0:00:00.108) 0:00:39.752 ******
2025-09-29 06:04:28.867170 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867181 | orchestrator |
2025-09-29 06:04:28.867192 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-29 06:04:28.867202 | orchestrator | Monday 29 September 2025  06:04:25 +0000 (0:00:00.112) 0:00:39.865 ******
2025-09-29 06:04:28.867213 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867224 | orchestrator |
2025-09-29 06:04:28.867234 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-29 06:04:28.867245 | orchestrator | Monday 29 September 2025  06:04:25 +0000 (0:00:00.115) 0:00:39.980 ******
2025-09-29 06:04:28.867256 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867266 | orchestrator |
2025-09-29 06:04:28.867277 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-29 06:04:28.867288 | orchestrator | Monday 29 September 2025  06:04:25 +0000 (0:00:00.120) 0:00:40.101 ******
2025-09-29 06:04:28.867298 | orchestrator | skipping: [testbed-node-4]
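The "Fail if size of … LVs > available" and "Fail if DB LV size < 30 GiB" tasks above all skip on this node because no separate DB/WAL devices are configured. The arithmetic they appear to guard can be sketched as follows; note that `check_db_lvs`, `MIN_DB_LV_BYTES`, and the equal-split division are illustrative assumptions, not the actual task code.

```python
# Hypothetical sketch of the DB-LV size check seen in the log: split a VG's
# free space evenly across the OSDs assigned to it and enforce the 30 GiB
# floor that the "Fail if DB LV size < 30 GiB" tasks check for.
MIN_DB_LV_BYTES = 30 * 1024**3  # 30 GiB lower bound

def check_db_lvs(vg_free_bytes: int, num_osds: int) -> int:
    """Return the per-OSD DB LV size in bytes, raising if it would be too small."""
    if num_osds <= 0:
        raise ValueError("no OSDs assigned to this DB VG")
    lv_size = vg_free_bytes // num_osds  # equal split of the VG's free space
    if lv_size < MIN_DB_LV_BYTES:
        raise ValueError(f"DB LV size {lv_size} bytes < 30 GiB for this VG")
    return lv_size
```

With a 64 GiB VG and two OSDs each DB LV gets 32 GiB and the check passes; with a 40 GiB VG the 20 GiB per-OSD share is rejected.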
2025-09-29 06:04:28.867309 | orchestrator |
2025-09-29 06:04:28.867320 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-29 06:04:28.867331 | orchestrator | Monday 29 September 2025  06:04:26 +0000 (0:00:00.100) 0:00:40.201 ******
2025-09-29 06:04:28.867344 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867357 | orchestrator |
2025-09-29 06:04:28.867370 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-29 06:04:28.867382 | orchestrator | Monday 29 September 2025  06:04:26 +0000 (0:00:00.115) 0:00:40.317 ******
2025-09-29 06:04:28.867411 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:28.867426 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:28.867439 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867451 | orchestrator |
2025-09-29 06:04:28.867465 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-29 06:04:28.867477 | orchestrator | Monday 29 September 2025  06:04:26 +0000 (0:00:00.132) 0:00:40.449 ******
2025-09-29 06:04:28.867489 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:28.867502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:28.867522 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867535 | orchestrator |
2025-09-29 06:04:28.867547 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-29 06:04:28.867560 | orchestrator | Monday 29 September 2025  06:04:26 +0000 (0:00:00.124) 0:00:40.574 ******
2025-09-29 06:04:28.867572 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:28.867585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:28.867597 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867610 | orchestrator |
2025-09-29 06:04:28.867623 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-29 06:04:28.867635 | orchestrator | Monday 29 September 2025  06:04:26 +0000 (0:00:00.138) 0:00:40.713 ******
2025-09-29 06:04:28.867647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:28.867660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:28.867673 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867686 | orchestrator |
2025-09-29 06:04:28.867697 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-29 06:04:28.867739 | orchestrator | Monday 29 September 2025  06:04:26 +0000 (0:00:00.253) 0:00:40.966 ******
2025-09-29 06:04:28.867752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:28.867763 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:28.867774 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867785 | orchestrator |
2025-09-29 06:04:28.867812 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-29 06:04:28.867823 | orchestrator | Monday 29 September 2025  06:04:26 +0000 (0:00:00.138) 0:00:41.105 ******
2025-09-29 06:04:28.867833 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:28.867844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:28.867855 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867867 | orchestrator |
2025-09-29 06:04:28.867878 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-29 06:04:28.867889 | orchestrator | Monday 29 September 2025  06:04:27 +0000 (0:00:00.131) 0:00:41.237 ******
2025-09-29 06:04:28.867899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:04:28.867910 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:04:28.867921 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:04:28.867932 | orchestrator |
2025-09-29 06:04:28.867942 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-29 06:04:28.867953 | orchestrator | Monday 29 September 2025  06:04:27 +0000 (0:00:00.146) 0:00:41.383 ******
2025-09-29 06:04:28.867964 | orchestrator | skipping: [testbed-node-4] =>
(item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})  2025-09-29 06:04:28.867983 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})  2025-09-29 06:04:28.867994 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:04:28.868005 | orchestrator | 2025-09-29 06:04:28.868021 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-29 06:04:28.868032 | orchestrator | Monday 29 September 2025 06:04:27 +0000 (0:00:00.140) 0:00:41.524 ****** 2025-09-29 06:04:28.868043 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:04:28.868054 | orchestrator | 2025-09-29 06:04:28.868065 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-29 06:04:28.868076 | orchestrator | Monday 29 September 2025 06:04:27 +0000 (0:00:00.489) 0:00:42.013 ****** 2025-09-29 06:04:28.868086 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:04:28.868097 | orchestrator | 2025-09-29 06:04:28.868108 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-29 06:04:28.868119 | orchestrator | Monday 29 September 2025 06:04:28 +0000 (0:00:00.473) 0:00:42.486 ****** 2025-09-29 06:04:28.868129 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:04:28.868140 | orchestrator | 2025-09-29 06:04:28.868151 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-29 06:04:28.868162 | orchestrator | Monday 29 September 2025 06:04:28 +0000 (0:00:00.140) 0:00:42.627 ****** 2025-09-29 06:04:28.868172 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'vg_name': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'}) 2025-09-29 06:04:28.868184 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'vg_name': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'}) 2025-09-29 06:04:28.868195 | orchestrator | 2025-09-29 06:04:28.868206 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-29 06:04:28.868217 | orchestrator | Monday 29 September 2025 06:04:28 +0000 (0:00:00.152) 0:00:42.780 ****** 2025-09-29 06:04:28.868228 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})  2025-09-29 06:04:28.868238 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})  2025-09-29 06:04:28.868249 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:04:28.868260 | orchestrator | 2025-09-29 06:04:28.868271 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-29 06:04:28.868281 | orchestrator | Monday 29 September 2025 06:04:28 +0000 (0:00:00.128) 0:00:42.908 ****** 2025-09-29 06:04:28.868292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})  2025-09-29 06:04:28.868303 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})  2025-09-29 06:04:28.868320 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:04:34.415700 | orchestrator | 2025-09-29 06:04:34.415865 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-29 06:04:34.415887 | orchestrator | Monday 29 September 2025 06:04:28 +0000 (0:00:00.138) 0:00:43.047 ****** 2025-09-29 06:04:34.415901 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})  2025-09-29 06:04:34.415914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})  2025-09-29 06:04:34.415926 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:04:34.415939 | orchestrator | 2025-09-29 06:04:34.415951 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-29 06:04:34.415962 | orchestrator | Monday 29 September 2025 06:04:28 +0000 (0:00:00.142) 0:00:43.190 ****** 2025-09-29 06:04:34.415998 | orchestrator | ok: [testbed-node-4] => { 2025-09-29 06:04:34.416011 | orchestrator |  "lvm_report": { 2025-09-29 06:04:34.416024 | orchestrator |  "lv": [ 2025-09-29 06:04:34.416035 | orchestrator |  { 2025-09-29 06:04:34.416046 | orchestrator |  "lv_name": "osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a", 2025-09-29 06:04:34.416059 | orchestrator |  "vg_name": "ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a" 2025-09-29 06:04:34.416070 | orchestrator |  }, 2025-09-29 06:04:34.416080 | orchestrator |  { 2025-09-29 06:04:34.416091 | orchestrator |  "lv_name": "osd-block-46f249ea-6148-566c-bc01-762c6d5847ca", 2025-09-29 06:04:34.416103 | orchestrator |  "vg_name": "ceph-46f249ea-6148-566c-bc01-762c6d5847ca" 2025-09-29 06:04:34.416114 | orchestrator |  } 2025-09-29 06:04:34.416125 | orchestrator |  ], 2025-09-29 06:04:34.416135 | orchestrator |  "pv": [ 2025-09-29 06:04:34.416146 | orchestrator |  { 2025-09-29 06:04:34.416157 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-29 06:04:34.416167 | orchestrator |  "vg_name": "ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a" 2025-09-29 06:04:34.416179 | orchestrator |  }, 2025-09-29 06:04:34.416190 | orchestrator |  { 2025-09-29 06:04:34.416201 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-29 06:04:34.416212 | orchestrator |  "vg_name": 
"ceph-46f249ea-6148-566c-bc01-762c6d5847ca" 2025-09-29 06:04:34.416223 | orchestrator |  } 2025-09-29 06:04:34.416234 | orchestrator |  ] 2025-09-29 06:04:34.416245 | orchestrator |  } 2025-09-29 06:04:34.416258 | orchestrator | } 2025-09-29 06:04:34.416270 | orchestrator | 2025-09-29 06:04:34.416282 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-29 06:04:34.416293 | orchestrator | 2025-09-29 06:04:34.416305 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-29 06:04:34.416317 | orchestrator | Monday 29 September 2025 06:04:29 +0000 (0:00:00.391) 0:00:43.582 ****** 2025-09-29 06:04:34.416329 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-29 06:04:34.416341 | orchestrator | 2025-09-29 06:04:34.416353 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-29 06:04:34.416365 | orchestrator | Monday 29 September 2025 06:04:29 +0000 (0:00:00.268) 0:00:43.850 ****** 2025-09-29 06:04:34.416377 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:04:34.416390 | orchestrator | 2025-09-29 06:04:34.416402 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.416414 | orchestrator | Monday 29 September 2025 06:04:29 +0000 (0:00:00.248) 0:00:44.099 ****** 2025-09-29 06:04:34.416426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-29 06:04:34.416438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-29 06:04:34.416450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-29 06:04:34.416462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-29 06:04:34.416474 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-29 06:04:34.416486 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-29 06:04:34.416498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-29 06:04:34.416510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-29 06:04:34.416522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-29 06:04:34.416541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-29 06:04:34.416562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-29 06:04:34.416594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-29 06:04:34.416609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-29 06:04:34.416621 | orchestrator | 2025-09-29 06:04:34.416633 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.416645 | orchestrator | Monday 29 September 2025 06:04:30 +0000 (0:00:00.412) 0:00:44.511 ****** 2025-09-29 06:04:34.416657 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.416673 | orchestrator | 2025-09-29 06:04:34.416684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.416696 | orchestrator | Monday 29 September 2025 06:04:30 +0000 (0:00:00.189) 0:00:44.701 ****** 2025-09-29 06:04:34.416707 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.416718 | orchestrator | 2025-09-29 06:04:34.416730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.416760 | orchestrator | 
Monday 29 September 2025 06:04:30 +0000 (0:00:00.208) 0:00:44.909 ****** 2025-09-29 06:04:34.416773 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.416784 | orchestrator | 2025-09-29 06:04:34.416826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.416837 | orchestrator | Monday 29 September 2025 06:04:30 +0000 (0:00:00.200) 0:00:45.110 ****** 2025-09-29 06:04:34.416848 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.416859 | orchestrator | 2025-09-29 06:04:34.416869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.416881 | orchestrator | Monday 29 September 2025 06:04:31 +0000 (0:00:00.212) 0:00:45.323 ****** 2025-09-29 06:04:34.416892 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.416903 | orchestrator | 2025-09-29 06:04:34.416964 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.416978 | orchestrator | Monday 29 September 2025 06:04:31 +0000 (0:00:00.189) 0:00:45.512 ****** 2025-09-29 06:04:34.416989 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.417000 | orchestrator | 2025-09-29 06:04:34.417011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.417022 | orchestrator | Monday 29 September 2025 06:04:31 +0000 (0:00:00.457) 0:00:45.970 ****** 2025-09-29 06:04:34.417033 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.417045 | orchestrator | 2025-09-29 06:04:34.417056 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.417066 | orchestrator | Monday 29 September 2025 06:04:31 +0000 (0:00:00.189) 0:00:46.160 ****** 2025-09-29 06:04:34.417077 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:34.417087 | orchestrator | 2025-09-29 06:04:34.417099 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.417109 | orchestrator | Monday 29 September 2025 06:04:32 +0000 (0:00:00.172) 0:00:46.333 ****** 2025-09-29 06:04:34.417121 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493) 2025-09-29 06:04:34.417133 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493) 2025-09-29 06:04:34.417144 | orchestrator | 2025-09-29 06:04:34.417155 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.417166 | orchestrator | Monday 29 September 2025 06:04:32 +0000 (0:00:00.375) 0:00:46.708 ****** 2025-09-29 06:04:34.417177 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1) 2025-09-29 06:04:34.417188 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1) 2025-09-29 06:04:34.417199 | orchestrator | 2025-09-29 06:04:34.417210 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.417220 | orchestrator | Monday 29 September 2025 06:04:32 +0000 (0:00:00.415) 0:00:47.124 ****** 2025-09-29 06:04:34.417245 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330) 2025-09-29 06:04:34.417256 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330) 2025-09-29 06:04:34.417267 | orchestrator | 2025-09-29 06:04:34.417278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.417290 | orchestrator | Monday 29 September 2025 06:04:33 +0000 (0:00:00.379) 0:00:47.504 ****** 2025-09-29 06:04:34.417301 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5) 2025-09-29 06:04:34.417312 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5) 2025-09-29 06:04:34.417322 | orchestrator | 2025-09-29 06:04:34.417333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-29 06:04:34.417344 | orchestrator | Monday 29 September 2025 06:04:33 +0000 (0:00:00.392) 0:00:47.897 ****** 2025-09-29 06:04:34.417355 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-29 06:04:34.417365 | orchestrator | 2025-09-29 06:04:34.417376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:34.417387 | orchestrator | Monday 29 September 2025 06:04:34 +0000 (0:00:00.321) 0:00:48.218 ****** 2025-09-29 06:04:34.417398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-29 06:04:34.417409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-29 06:04:34.417420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-29 06:04:34.417430 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-29 06:04:34.417441 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-29 06:04:34.417452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-29 06:04:34.417464 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-29 06:04:34.417475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-29 06:04:34.417485 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-29 06:04:34.417497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-29 06:04:34.417508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-29 06:04:34.417528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-29 06:04:42.447194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-29 06:04:42.447305 | orchestrator | 2025-09-29 06:04:42.447322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447334 | orchestrator | Monday 29 September 2025 06:04:34 +0000 (0:00:00.376) 0:00:48.594 ****** 2025-09-29 06:04:42.447345 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447357 | orchestrator | 2025-09-29 06:04:42.447368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447379 | orchestrator | Monday 29 September 2025 06:04:34 +0000 (0:00:00.178) 0:00:48.772 ****** 2025-09-29 06:04:42.447390 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447401 | orchestrator | 2025-09-29 06:04:42.447411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447422 | orchestrator | Monday 29 September 2025 06:04:34 +0000 (0:00:00.184) 0:00:48.957 ****** 2025-09-29 06:04:42.447433 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447444 | orchestrator | 2025-09-29 06:04:42.447454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447489 | orchestrator | Monday 29 September 2025 06:04:35 +0000 (0:00:00.436) 0:00:49.394 ****** 2025-09-29 06:04:42.447501 | orchestrator | 
skipping: [testbed-node-5] 2025-09-29 06:04:42.447512 | orchestrator | 2025-09-29 06:04:42.447522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447533 | orchestrator | Monday 29 September 2025 06:04:35 +0000 (0:00:00.178) 0:00:49.573 ****** 2025-09-29 06:04:42.447544 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447554 | orchestrator | 2025-09-29 06:04:42.447565 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447575 | orchestrator | Monday 29 September 2025 06:04:35 +0000 (0:00:00.170) 0:00:49.743 ****** 2025-09-29 06:04:42.447586 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447597 | orchestrator | 2025-09-29 06:04:42.447607 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447618 | orchestrator | Monday 29 September 2025 06:04:35 +0000 (0:00:00.177) 0:00:49.921 ****** 2025-09-29 06:04:42.447628 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447639 | orchestrator | 2025-09-29 06:04:42.447649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447660 | orchestrator | Monday 29 September 2025 06:04:35 +0000 (0:00:00.183) 0:00:50.104 ****** 2025-09-29 06:04:42.447670 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447681 | orchestrator | 2025-09-29 06:04:42.447691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447702 | orchestrator | Monday 29 September 2025 06:04:36 +0000 (0:00:00.204) 0:00:50.309 ****** 2025-09-29 06:04:42.447712 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-29 06:04:42.447724 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-29 06:04:42.447752 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-29 
06:04:42.447765 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-29 06:04:42.447777 | orchestrator | 2025-09-29 06:04:42.447824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447837 | orchestrator | Monday 29 September 2025 06:04:36 +0000 (0:00:00.585) 0:00:50.894 ****** 2025-09-29 06:04:42.447850 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447862 | orchestrator | 2025-09-29 06:04:42.447874 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447887 | orchestrator | Monday 29 September 2025 06:04:36 +0000 (0:00:00.168) 0:00:51.062 ****** 2025-09-29 06:04:42.447899 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447911 | orchestrator | 2025-09-29 06:04:42.447925 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447937 | orchestrator | Monday 29 September 2025 06:04:37 +0000 (0:00:00.166) 0:00:51.229 ****** 2025-09-29 06:04:42.447949 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.447961 | orchestrator | 2025-09-29 06:04:42.447973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-29 06:04:42.447985 | orchestrator | Monday 29 September 2025 06:04:37 +0000 (0:00:00.175) 0:00:51.405 ****** 2025-09-29 06:04:42.447998 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.448010 | orchestrator | 2025-09-29 06:04:42.448023 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-29 06:04:42.448036 | orchestrator | Monday 29 September 2025 06:04:37 +0000 (0:00:00.189) 0:00:51.595 ****** 2025-09-29 06:04:42.448049 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.448061 | orchestrator | 2025-09-29 06:04:42.448074 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-29 06:04:42.448085 | orchestrator | Monday 29 September 2025 06:04:37 +0000 (0:00:00.297) 0:00:51.892 ****** 2025-09-29 06:04:42.448096 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6be24fb8-e256-5721-a6a2-6a7f57bf9910'}}) 2025-09-29 06:04:42.448107 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ed2553fc-8d98-5289-a275-720d5101f8b0'}}) 2025-09-29 06:04:42.448126 | orchestrator | 2025-09-29 06:04:42.448137 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-29 06:04:42.448147 | orchestrator | Monday 29 September 2025 06:04:37 +0000 (0:00:00.168) 0:00:52.061 ****** 2025-09-29 06:04:42.448159 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'}) 2025-09-29 06:04:42.448172 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'}) 2025-09-29 06:04:42.448182 | orchestrator | 2025-09-29 06:04:42.448193 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-29 06:04:42.448221 | orchestrator | Monday 29 September 2025 06:04:39 +0000 (0:00:01.773) 0:00:53.834 ****** 2025-09-29 06:04:42.448233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})  2025-09-29 06:04:42.448245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})  2025-09-29 06:04:42.448256 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.448267 | orchestrator | 2025-09-29 06:04:42.448278 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-29 06:04:42.448288 | orchestrator | Monday 29 September 2025 06:04:39 +0000 (0:00:00.167) 0:00:54.002 ****** 2025-09-29 06:04:42.448299 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'}) 2025-09-29 06:04:42.448310 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'}) 2025-09-29 06:04:42.448321 | orchestrator | 2025-09-29 06:04:42.448332 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-29 06:04:42.448343 | orchestrator | Monday 29 September 2025 06:04:41 +0000 (0:00:01.270) 0:00:55.272 ****** 2025-09-29 06:04:42.448353 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})  2025-09-29 06:04:42.448364 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})  2025-09-29 06:04:42.448375 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.448386 | orchestrator | 2025-09-29 06:04:42.448396 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-29 06:04:42.448407 | orchestrator | Monday 29 September 2025 06:04:41 +0000 (0:00:00.150) 0:00:55.423 ****** 2025-09-29 06:04:42.448417 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:04:42.448428 | orchestrator | 2025-09-29 06:04:42.448439 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-29 06:04:42.448449 | orchestrator | Monday 29 September 2025 06:04:41 +0000 (0:00:00.148) 0:00:55.571 ****** 2025-09-29 06:04:42.448460 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:42.448476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:42.448487 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:42.448498 | orchestrator |
2025-09-29 06:04:42.448509 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-29 06:04:42.448519 | orchestrator | Monday 29 September 2025 06:04:41 +0000 (0:00:00.140) 0:00:55.712 ******
2025-09-29 06:04:42.448530 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:42.448548 | orchestrator |
2025-09-29 06:04:42.448559 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-29 06:04:42.448569 | orchestrator | Monday 29 September 2025 06:04:41 +0000 (0:00:00.102) 0:00:55.815 ******
2025-09-29 06:04:42.448580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:42.448591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:42.448602 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:42.448612 | orchestrator |
2025-09-29 06:04:42.448623 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-29 06:04:42.448634 | orchestrator | Monday 29 September 2025 06:04:41 +0000 (0:00:00.130) 0:00:55.945 ******
2025-09-29 06:04:42.448645 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:42.448655 | orchestrator |
2025-09-29 06:04:42.448666 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-29 06:04:42.448676 | orchestrator | Monday 29 September 2025 06:04:41 +0000 (0:00:00.131) 0:00:56.077 ******
2025-09-29 06:04:42.448687 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:42.448698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:42.448709 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:42.448719 | orchestrator |
2025-09-29 06:04:42.448730 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-29 06:04:42.448741 | orchestrator | Monday 29 September 2025 06:04:42 +0000 (0:00:00.275) 0:00:56.210 ******
2025-09-29 06:04:42.448751 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:42.448762 | orchestrator |
2025-09-29 06:04:42.448773 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-29 06:04:42.448783 | orchestrator | Monday 29 September 2025 06:04:42 +0000 (0:00:00.275) 0:00:56.486 ******
2025-09-29 06:04:42.448829 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:48.231849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:48.231946 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.231958 | orchestrator |
2025-09-29 06:04:48.231967 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-29 06:04:48.231976 | orchestrator | Monday 29 September 2025 06:04:42 +0000 (0:00:00.145) 0:00:56.631 ******
2025-09-29 06:04:48.231984 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:48.231992 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:48.232000 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232007 | orchestrator |
2025-09-29 06:04:48.232015 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-29 06:04:48.232023 | orchestrator | Monday 29 September 2025 06:04:42 +0000 (0:00:00.130) 0:00:56.762 ******
2025-09-29 06:04:48.232030 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:48.232037 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:48.232045 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232073 | orchestrator |
2025-09-29 06:04:48.232081 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-29 06:04:48.232088 | orchestrator | Monday 29 September 2025 06:04:42 +0000 (0:00:00.126) 0:00:56.888 ******
2025-09-29 06:04:48.232095 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232102 | orchestrator |
2025-09-29 06:04:48.232110 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-29 06:04:48.232117 | orchestrator | Monday 29 September 2025 06:04:42 +0000 (0:00:00.142) 0:00:57.030 ******
2025-09-29 06:04:48.232124 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232131 | orchestrator |
2025-09-29 06:04:48.232138 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-29 06:04:48.232146 | orchestrator | Monday 29 September 2025 06:04:42 +0000 (0:00:00.114) 0:00:57.144 ******
2025-09-29 06:04:48.232153 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232160 | orchestrator |
2025-09-29 06:04:48.232167 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-29 06:04:48.232175 | orchestrator | Monday 29 September 2025 06:04:43 +0000 (0:00:00.117) 0:00:57.262 ******
2025-09-29 06:04:48.232182 | orchestrator | ok: [testbed-node-5] => {
2025-09-29 06:04:48.232191 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-09-29 06:04:48.232198 | orchestrator | }
2025-09-29 06:04:48.232205 | orchestrator |
2025-09-29 06:04:48.232213 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-29 06:04:48.232220 | orchestrator | Monday 29 September 2025 06:04:43 +0000 (0:00:00.133) 0:00:57.396 ******
2025-09-29 06:04:48.232227 | orchestrator | ok: [testbed-node-5] => {
2025-09-29 06:04:48.232235 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-09-29 06:04:48.232242 | orchestrator | }
2025-09-29 06:04:48.232249 | orchestrator |
2025-09-29 06:04:48.232256 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-29 06:04:48.232264 | orchestrator | Monday 29 September 2025 06:04:43 +0000 (0:00:00.128) 0:00:57.525 ******
2025-09-29 06:04:48.232271 | orchestrator | ok: [testbed-node-5] => {
2025-09-29 06:04:48.232279 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-09-29 06:04:48.232286 | orchestrator | }
2025-09-29 06:04:48.232293 | orchestrator |
2025-09-29 06:04:48.232301 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-29 06:04:48.232308 | orchestrator | Monday 29 September 2025 06:04:43 +0000 (0:00:00.139) 0:00:57.664 ******
2025-09-29 06:04:48.232315 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:48.232323 | orchestrator |
2025-09-29 06:04:48.232330 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-29 06:04:48.232337 | orchestrator | Monday 29 September 2025 06:04:43 +0000 (0:00:00.499) 0:00:58.164 ******
2025-09-29 06:04:48.232345 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:48.232352 | orchestrator |
2025-09-29 06:04:48.232359 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-29 06:04:48.232366 | orchestrator | Monday 29 September 2025 06:04:44 +0000 (0:00:00.480) 0:00:58.644 ******
2025-09-29 06:04:48.232373 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:48.232381 | orchestrator |
2025-09-29 06:04:48.232388 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-29 06:04:48.232395 | orchestrator | Monday 29 September 2025 06:04:45 +0000 (0:00:00.717) 0:00:59.362 ******
2025-09-29 06:04:48.232402 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:48.232410 | orchestrator |
2025-09-29 06:04:48.232417 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-29 06:04:48.232424 | orchestrator | Monday 29 September 2025 06:04:45 +0000 (0:00:00.200) 0:00:59.562 ******
2025-09-29 06:04:48.232432 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232439 | orchestrator |
2025-09-29 06:04:48.232446 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-29 06:04:48.232453 | orchestrator | Monday 29 September 2025 06:04:45 +0000 (0:00:00.120) 0:00:59.683 ******
2025-09-29 06:04:48.232467 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232474 | orchestrator |
2025-09-29 06:04:48.232481 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-29 06:04:48.232488 | orchestrator | Monday 29 September 2025 06:04:45 +0000 (0:00:00.128) 0:00:59.811 ******
2025-09-29 06:04:48.232496 | orchestrator | ok: [testbed-node-5] => {
2025-09-29 06:04:48.232519 | orchestrator |  "vgs_report": {
2025-09-29 06:04:48.232526 | orchestrator |  "vg": []
2025-09-29 06:04:48.232547 | orchestrator |  }
2025-09-29 06:04:48.232555 | orchestrator | }
2025-09-29 06:04:48.232562 | orchestrator |
2025-09-29 06:04:48.232569 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-29 06:04:48.232577 | orchestrator | Monday 29 September 2025 06:04:45 +0000 (0:00:00.148) 0:00:59.959 ******
2025-09-29 06:04:48.232584 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232591 | orchestrator |
2025-09-29 06:04:48.232598 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-29 06:04:48.232606 | orchestrator | Monday 29 September 2025 06:04:45 +0000 (0:00:00.138) 0:01:00.098 ******
2025-09-29 06:04:48.232613 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232620 | orchestrator |
2025-09-29 06:04:48.232627 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-29 06:04:48.232634 | orchestrator | Monday 29 September 2025 06:04:46 +0000 (0:00:00.147) 0:01:00.245 ******
2025-09-29 06:04:48.232642 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232649 | orchestrator |
2025-09-29 06:04:48.232656 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-29 06:04:48.232663 | orchestrator | Monday 29 September 2025 06:04:46 +0000 (0:00:00.149) 0:01:00.394 ******
2025-09-29 06:04:48.232670 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232678 | orchestrator |
2025-09-29 06:04:48.232685 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-29 06:04:48.232692 | orchestrator | Monday 29 September 2025 06:04:46 +0000 (0:00:00.149) 0:01:00.544 ******
2025-09-29 06:04:48.232700 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232707 | orchestrator |
2025-09-29 06:04:48.232714 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-29 06:04:48.232721 | orchestrator | Monday 29 September 2025 06:04:46 +0000 (0:00:00.118) 0:01:00.662 ******
2025-09-29 06:04:48.232728 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232736 | orchestrator |
2025-09-29 06:04:48.232743 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-29 06:04:48.232750 | orchestrator | Monday 29 September 2025 06:04:46 +0000 (0:00:00.137) 0:01:00.800 ******
2025-09-29 06:04:48.232757 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232764 | orchestrator |
2025-09-29 06:04:48.232772 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-29 06:04:48.232779 | orchestrator | Monday 29 September 2025 06:04:46 +0000 (0:00:00.129) 0:01:00.929 ******
2025-09-29 06:04:48.232802 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232810 | orchestrator |
2025-09-29 06:04:48.232817 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-29 06:04:48.232824 | orchestrator | Monday 29 September 2025 06:04:46 +0000 (0:00:00.126) 0:01:01.056 ******
2025-09-29 06:04:48.232831 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232839 | orchestrator |
2025-09-29 06:04:48.232846 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-29 06:04:48.232857 | orchestrator | Monday 29 September 2025 06:04:47 +0000 (0:00:00.330) 0:01:01.386 ******
2025-09-29 06:04:48.232865 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232872 | orchestrator |
2025-09-29 06:04:48.232879 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-29 06:04:48.232887 | orchestrator | Monday 29 September 2025 06:04:47 +0000 (0:00:00.134) 0:01:01.521 ******
2025-09-29 06:04:48.232894 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232906 | orchestrator |
2025-09-29 06:04:48.232913 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-29 06:04:48.232921 | orchestrator | Monday 29 September 2025 06:04:47 +0000 (0:00:00.126) 0:01:01.647 ******
2025-09-29 06:04:48.232928 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232935 | orchestrator |
2025-09-29 06:04:48.232942 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-29 06:04:48.232950 | orchestrator | Monday 29 September 2025 06:04:47 +0000 (0:00:00.122) 0:01:01.770 ******
2025-09-29 06:04:48.232957 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232964 | orchestrator |
2025-09-29 06:04:48.232971 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-29 06:04:48.232979 | orchestrator | Monday 29 September 2025 06:04:47 +0000 (0:00:00.102) 0:01:01.873 ******
2025-09-29 06:04:48.232986 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.232993 | orchestrator |
2025-09-29 06:04:48.233001 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-29 06:04:48.233008 | orchestrator | Monday 29 September 2025 06:04:47 +0000 (0:00:00.130) 0:01:02.003 ******
2025-09-29 06:04:48.233015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:48.233023 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:48.233030 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.233037 | orchestrator |
2025-09-29 06:04:48.233045 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-29 06:04:48.233052 | orchestrator | Monday 29 September 2025 06:04:47 +0000 (0:00:00.135) 0:01:02.139 ******
2025-09-29 06:04:48.233059 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:48.233066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:48.233074 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:48.233081 | orchestrator |
2025-09-29 06:04:48.233088 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-29 06:04:48.233096 | orchestrator | Monday 29 September 2025 06:04:48 +0000 (0:00:00.144) 0:01:02.283 ******
2025-09-29 06:04:48.233108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.842362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.842465 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.842480 | orchestrator |
2025-09-29 06:04:50.842492 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-29 06:04:50.842505 | orchestrator | Monday 29 September 2025 06:04:48 +0000 (0:00:00.133) 0:01:02.416 ******
2025-09-29 06:04:50.842516 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.842527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.842538 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.842548 | orchestrator |
2025-09-29 06:04:50.842559 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-29 06:04:50.842570 | orchestrator | Monday 29 September 2025 06:04:48 +0000 (0:00:00.119) 0:01:02.536 ******
2025-09-29 06:04:50.842581 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.842615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.842626 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.842637 | orchestrator |
2025-09-29 06:04:50.842648 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-29 06:04:50.842658 | orchestrator | Monday 29 September 2025 06:04:48 +0000 (0:00:00.136) 0:01:02.672 ******
2025-09-29 06:04:50.842669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.842679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.842690 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.842701 | orchestrator |
2025-09-29 06:04:50.842725 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-29 06:04:50.842736 | orchestrator | Monday 29 September 2025 06:04:48 +0000 (0:00:00.114) 0:01:02.786 ******
2025-09-29 06:04:50.842747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.842758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.842769 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.842779 | orchestrator |
2025-09-29 06:04:50.842838 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-29 06:04:50.842851 | orchestrator | Monday 29 September 2025 06:04:48 +0000 (0:00:00.258) 0:01:03.045 ******
2025-09-29 06:04:50.842862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.842872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.842883 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.842894 | orchestrator |
2025-09-29 06:04:50.842908 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-29 06:04:50.842921 | orchestrator | Monday 29 September 2025 06:04:48 +0000 (0:00:00.137) 0:01:03.182 ******
2025-09-29 06:04:50.842934 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:50.842947 | orchestrator |
2025-09-29 06:04:50.842960 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-29 06:04:50.842972 | orchestrator | Monday 29 September 2025 06:04:49 +0000 (0:00:00.480) 0:01:03.663 ******
2025-09-29 06:04:50.842985 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:50.842997 | orchestrator |
2025-09-29 06:04:50.843009 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-29 06:04:50.843021 | orchestrator | Monday 29 September 2025 06:04:49 +0000 (0:00:00.514) 0:01:04.177 ******
2025-09-29 06:04:50.843033 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:04:50.843046 | orchestrator |
2025-09-29 06:04:50.843058 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-29 06:04:50.843070 | orchestrator | Monday 29 September 2025 06:04:50 +0000 (0:00:00.129) 0:01:04.306 ******
2025-09-29 06:04:50.843082 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'vg_name': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.843095 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'vg_name': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.843107 | orchestrator |
2025-09-29 06:04:50.843119 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-29 06:04:50.843140 | orchestrator | Monday 29 September 2025 06:04:50 +0000 (0:00:00.156) 0:01:04.462 ******
2025-09-29 06:04:50.843170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.843184 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.843196 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.843208 | orchestrator |
2025-09-29 06:04:50.843221 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-29 06:04:50.843233 | orchestrator | Monday 29 September 2025 06:04:50 +0000 (0:00:00.136) 0:01:04.599 ******
2025-09-29 06:04:50.843245 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.843258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.843269 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.843280 | orchestrator |
2025-09-29 06:04:50.843291 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-29 06:04:50.843301 | orchestrator | Monday 29 September 2025 06:04:50 +0000 (0:00:00.139) 0:01:04.738 ******
2025-09-29 06:04:50.843312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:04:50.843323 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:04:50.843334 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:04:50.843344 | orchestrator |
2025-09-29 06:04:50.843355 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-29 06:04:50.843366 | orchestrator | Monday 29 September 2025 06:04:50 +0000 (0:00:00.128) 0:01:04.866 ******
2025-09-29 06:04:50.843376 | orchestrator | ok: [testbed-node-5] => {
2025-09-29 06:04:50.843387 | orchestrator |  "lvm_report": {
2025-09-29 06:04:50.843398 | orchestrator |  "lv": [
2025-09-29 06:04:50.843409 | orchestrator |  {
2025-09-29 06:04:50.843419 | orchestrator |  "lv_name": "osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910",
2025-09-29 06:04:50.843436 | orchestrator |  "vg_name": "ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910"
2025-09-29 06:04:50.843447 | orchestrator |  },
2025-09-29 06:04:50.843458 | orchestrator |  {
2025-09-29 06:04:50.843469 | orchestrator |  "lv_name": "osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0",
2025-09-29 06:04:50.843479 | orchestrator |  "vg_name": "ceph-ed2553fc-8d98-5289-a275-720d5101f8b0"
2025-09-29 06:04:50.843490 | orchestrator |  }
2025-09-29 06:04:50.843500 | orchestrator |  ],
2025-09-29 06:04:50.843511 | orchestrator |  "pv": [
2025-09-29 06:04:50.843521 | orchestrator |  {
2025-09-29 06:04:50.843532 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-29 06:04:50.843543 | orchestrator |  "vg_name": "ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910"
2025-09-29 06:04:50.843553 | orchestrator |  },
2025-09-29 06:04:50.843564 | orchestrator |  {
2025-09-29 06:04:50.843575 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-29 06:04:50.843585 | orchestrator |  "vg_name": "ceph-ed2553fc-8d98-5289-a275-720d5101f8b0"
2025-09-29 06:04:50.843596 | orchestrator |  }
2025-09-29 06:04:50.843606 | orchestrator |  ]
2025-09-29 06:04:50.843617 | orchestrator |  }
2025-09-29 06:04:50.843628 | orchestrator | }
2025-09-29 06:04:50.843639 | orchestrator |
2025-09-29 06:04:50.843650 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:04:50.843668 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-29 06:04:50.843679 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-29 06:04:50.843690 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-29 06:04:50.843701 | orchestrator |
2025-09-29 06:04:50.843711 | orchestrator |
2025-09-29 06:04:50.843722 | orchestrator |
2025-09-29 06:04:50.843733 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:04:50.843744 | orchestrator | Monday 29 September 2025 06:04:50 +0000 (0:00:00.126) 0:01:04.993 ******
2025-09-29 06:04:50.843754 | orchestrator | ===============================================================================
2025-09-29 06:04:50.843765 | orchestrator | Create block VGs -------------------------------------------------------- 5.52s
2025-09-29 06:04:50.843775 | orchestrator | Create block LVs -------------------------------------------------------- 3.87s
2025-09-29 06:04:50.843801 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s
2025-09-29 06:04:50.843813 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.68s
2025-09-29 06:04:50.843823 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.48s
2025-09-29 06:04:50.843834 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.48s
2025-09-29 06:04:50.843844 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.45s
2025-09-29 06:04:50.843855 | orchestrator | Add known partitions to the list of available block devices ------------- 1.26s
2025-09-29 06:04:50.843872 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s
2025-09-29 06:04:51.061113 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2025-09-29 06:04:51.061206 | orchestrator | Print LVM report data --------------------------------------------------- 0.78s
2025-09-29 06:04:51.061219 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s
2025-09-29 06:04:51.061229 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2025-09-29 06:04:51.061239 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.73s
2025-09-29 06:04:51.061248 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2025-09-29 06:04:51.061258 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-09-29 06:04:51.061268 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.64s
2025-09-29 06:04:51.061277 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2025-09-29 06:04:51.061286 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.57s
2025-09-29 06:04:51.061296 | orchestrator | Check whether ceph_db_wal_devices is used exclusively ------------------- 0.57s
2025-09-29 06:05:03.102987 | orchestrator | 2025-09-29 06:05:03 | INFO  | Task 24b58b1d-7900-49fb-9e1e-405265533808 (facts) was prepared for execution.
2025-09-29 06:05:03.103096 | orchestrator | 2025-09-29 06:05:03 | INFO  | It takes a moment until task 24b58b1d-7900-49fb-9e1e-405265533808 (facts) has been started and output is visible here.
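The "Gather … VGs with total and available size in bytes" tasks in the play above collect LVM report output in JSON form and combine it for the size checks that follow. As a minimal sketch of what parsing such a report involves (the helper name and sample data are ours, not from the playbook, and the layout assumed is that of `vgs --reportformat json --units b`):

```python
import json

def vg_free_bytes(report_json: str) -> dict:
    """Map VG name -> free bytes from an LVM JSON VG report string."""
    report = json.loads(report_json)
    result = {}
    for block in report.get("report", []):
        for vg in block.get("vg", []):
            # LVM byte sizes carry a trailing "B", e.g. "32208060416B"
            result[vg["vg_name"]] = int(vg["vg_free"].rstrip("B"))
    return result

# Hypothetical sample shaped like the report for a fully used block VG
sample = json.dumps({"report": [{"vg": [
    {"vg_name": "ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910",
     "vg_size": "32208060416B", "vg_free": "0B"}
]}]})
print(vg_free_bytes(sample))
```

A check like "Fail if size of DB LVs … > available" then only needs to compare the requested LV sizes against these per-VG free-byte values.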
2025-09-29 06:05:15.124272 | orchestrator |
2025-09-29 06:05:15.124348 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-29 06:05:15.124357 | orchestrator |
2025-09-29 06:05:15.124364 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-29 06:05:15.124370 | orchestrator | Monday 29 September 2025 06:05:07 +0000 (0:00:00.245) 0:00:00.245 ******
2025-09-29 06:05:15.124376 | orchestrator | ok: [testbed-manager]
2025-09-29 06:05:15.124384 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:05:15.124408 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:05:15.124416 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:05:15.124422 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:05:15.124427 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:05:15.124433 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:05:15.124439 | orchestrator |
2025-09-29 06:05:15.124445 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-29 06:05:15.124450 | orchestrator | Monday 29 September 2025 06:05:07 +0000 (0:00:00.924) 0:00:01.169 ******
2025-09-29 06:05:15.124456 | orchestrator | skipping: [testbed-manager]
2025-09-29 06:05:15.124463 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:05:15.124471 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:05:15.124477 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:05:15.124483 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:05:15.124489 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:05:15.124496 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:05:15.124501 | orchestrator |
2025-09-29 06:05:15.124508 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-29 06:05:15.124514 | orchestrator |
2025-09-29 06:05:15.124520 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-29 06:05:15.124527 | orchestrator | Monday 29 September 2025 06:05:09 +0000 (0:00:01.067) 0:00:02.236 ******
2025-09-29 06:05:15.124533 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:05:15.124540 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:05:15.124546 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:05:15.124552 | orchestrator | ok: [testbed-manager]
2025-09-29 06:05:15.124558 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:05:15.124564 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:05:15.124570 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:05:15.124576 | orchestrator |
2025-09-29 06:05:15.124582 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-29 06:05:15.124587 | orchestrator |
2025-09-29 06:05:15.124593 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-29 06:05:15.124600 | orchestrator | Monday 29 September 2025 06:05:14 +0000 (0:00:05.313) 0:00:07.550 ******
2025-09-29 06:05:15.124606 | orchestrator | skipping: [testbed-manager]
2025-09-29 06:05:15.124613 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:05:15.124619 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:05:15.124624 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:05:15.124630 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:05:15.124636 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:05:15.124642 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:05:15.124648 | orchestrator |
2025-09-29 06:05:15.124655 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:05:15.124661 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 06:05:15.124668 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 06:05:15.124674 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 06:05:15.124680 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 06:05:15.124686 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 06:05:15.124692 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 06:05:15.124698 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-29 06:05:15.124712 | orchestrator |
2025-09-29 06:05:15.124719 | orchestrator |
2025-09-29 06:05:15.124726 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:05:15.124732 | orchestrator | Monday 29 September 2025 06:05:14 +0000 (0:00:00.450) 0:00:08.001 ******
2025-09-29 06:05:15.124738 | orchestrator | ===============================================================================
2025-09-29 06:05:15.124744 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.31s
2025-09-29 06:05:15.124750 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s
2025-09-29 06:05:15.124757 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s
2025-09-29 06:05:15.124763 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s
2025-09-29 06:05:27.384289 | orchestrator | 2025-09-29 06:05:27 | INFO  | Task 5fe2820c-c4b6-46de-93d6-cb60ced5d070 (frr) was prepared for execution.
2025-09-29 06:05:27.384362 | orchestrator | 2025-09-29 06:05:27 | INFO  | It takes a moment until task 5fe2820c-c4b6-46de-93d6-cb60ced5d070 (frr) has been started and output is visible here.
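The PLAY RECAP lines above are the natural place to machine-check a run like this one (e.g. to gate on `failed=0` and `unreachable=0` per host). A minimal sketch of parsing one such recap line; the helper name is ours, not part of Ansible or OSISM:

```python
import re

def parse_recap(line: str) -> dict:
    """Parse an Ansible PLAY RECAP line into a host name and its counters."""
    host, _, rest = line.partition(":")
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return {"host": host.strip(), **counters}

line = ("testbed-node-5 : ok=2 changed=0 unreachable=0 "
        "failed=0 skipped=2 rescued=0 ignored=0")
stats = parse_recap(line)
print(stats["host"], stats["failed"])  # failed == 0 means the host passed
```

Applied across all recap lines of a job log, this gives a simple pass/fail summary without re-running the playbook.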
2025-09-29 06:05:50.900503 | orchestrator | 2025-09-29 06:05:50.900616 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-29 06:05:50.900632 | orchestrator | 2025-09-29 06:05:50.900645 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-29 06:05:50.900657 | orchestrator | Monday 29 September 2025 06:05:31 +0000 (0:00:00.228) 0:00:00.228 ****** 2025-09-29 06:05:50.900685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-29 06:05:50.900699 | orchestrator | 2025-09-29 06:05:50.900710 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-29 06:05:50.900721 | orchestrator | Monday 29 September 2025 06:05:31 +0000 (0:00:00.223) 0:00:00.452 ****** 2025-09-29 06:05:50.900732 | orchestrator | changed: [testbed-manager] 2025-09-29 06:05:50.900744 | orchestrator | 2025-09-29 06:05:50.900755 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-29 06:05:50.900766 | orchestrator | Monday 29 September 2025 06:05:32 +0000 (0:00:01.053) 0:00:01.505 ****** 2025-09-29 06:05:50.900830 | orchestrator | changed: [testbed-manager] 2025-09-29 06:05:50.900843 | orchestrator | 2025-09-29 06:05:50.900859 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-29 06:05:50.900870 | orchestrator | Monday 29 September 2025 06:05:40 +0000 (0:00:08.341) 0:00:09.847 ****** 2025-09-29 06:05:50.900881 | orchestrator | ok: [testbed-manager] 2025-09-29 06:05:50.900893 | orchestrator | 2025-09-29 06:05:50.900904 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-29 06:05:50.900915 | orchestrator | Monday 29 September 2025 06:05:41 +0000 (0:00:01.098) 0:00:10.946 ****** 2025-09-29 
06:05:50.900925 | orchestrator | changed: [testbed-manager] 2025-09-29 06:05:50.900936 | orchestrator | 2025-09-29 06:05:50.900947 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2025-09-29 06:05:50.900958 | orchestrator | Monday 29 September 2025 06:05:42 +0000 (0:00:00.822) 0:00:11.768 ****** 2025-09-29 06:05:50.900968 | orchestrator | ok: [testbed-manager] 2025-09-29 06:05:50.900979 | orchestrator | 2025-09-29 06:05:50.900990 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2025-09-29 06:05:50.901001 | orchestrator | Monday 29 September 2025 06:05:43 +0000 (0:00:01.112) 0:00:12.881 ****** 2025-09-29 06:05:50.901012 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-29 06:05:50.901022 | orchestrator | 2025-09-29 06:05:50.901033 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] *** 2025-09-29 06:05:50.901046 | orchestrator | Monday 29 September 2025 06:05:44 +0000 (0:00:00.717) 0:00:13.598 ****** 2025-09-29 06:05:50.901060 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:05:50.901073 | orchestrator | 2025-09-29 06:05:50.901087 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] ********* 2025-09-29 06:05:50.901121 | orchestrator | Monday 29 September 2025 06:05:44 +0000 (0:00:00.138) 0:00:13.737 ****** 2025-09-29 06:05:50.901133 | orchestrator | changed: [testbed-manager] 2025-09-29 06:05:50.901144 | orchestrator | 2025-09-29 06:05:50.901154 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2025-09-29 06:05:50.901165 | orchestrator | Monday 29 September 2025 06:05:45 +0000 (0:00:00.926) 0:00:14.664 ****** 2025-09-29 06:05:50.901176 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2025-09-29 06:05:50.901186 | orchestrator | changed: [testbed-manager] => 
(item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2025-09-29 06:05:50.901198 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2025-09-29 06:05:50.901209 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2025-09-29 06:05:50.901220 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2025-09-29 06:05:50.901230 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2025-09-29 06:05:50.901241 | orchestrator | 2025-09-29 06:05:50.901252 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2025-09-29 06:05:50.901262 | orchestrator | Monday 29 September 2025 06:05:47 +0000 (0:00:02.254) 0:00:16.918 ****** 2025-09-29 06:05:50.901273 | orchestrator | ok: [testbed-manager] 2025-09-29 06:05:50.901283 | orchestrator | 2025-09-29 06:05:50.901294 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2025-09-29 06:05:50.901305 | orchestrator | Monday 29 September 2025 06:05:49 +0000 (0:00:01.350) 0:00:18.269 ****** 2025-09-29 06:05:50.901315 | orchestrator | changed: [testbed-manager] 2025-09-29 06:05:50.901326 | orchestrator | 2025-09-29 06:05:50.901336 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:05:50.901347 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:05:50.901359 | orchestrator | 2025-09-29 06:05:50.901369 | orchestrator | 2025-09-29 06:05:50.901380 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:05:50.901391 | orchestrator | Monday 29 September 2025 06:05:50 +0000 (0:00:01.402) 0:00:19.671 ****** 2025-09-29 
06:05:50.901401 | orchestrator | =============================================================================== 2025-09-29 06:05:50.901412 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.34s 2025-09-29 06:05:50.901423 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.25s 2025-09-29 06:05:50.901433 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s 2025-09-29 06:05:50.901444 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.35s 2025-09-29 06:05:50.901471 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s 2025-09-29 06:05:50.901483 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.10s 2025-09-29 06:05:50.901493 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.05s 2025-09-29 06:05:50.901504 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.93s 2025-09-29 06:05:50.901515 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.82s 2025-09-29 06:05:50.901526 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.72s 2025-09-29 06:05:50.901536 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2025-09-29 06:05:50.901547 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.14s 2025-09-29 06:05:51.200864 | orchestrator | 2025-09-29 06:05:51.203266 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Sep 29 06:05:51 UTC 2025 2025-09-29 06:05:51.203323 | orchestrator | 2025-09-29 06:05:53.057008 | orchestrator | 2025-09-29 06:05:53 | INFO  | Collection nutshell is prepared for execution 2025-09-29 06:05:53.057106 | orchestrator | 2025-09-29 
06:05:53 | INFO  | D [0] - dotfiles 2025-09-29 06:06:03.180598 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [0] - homer 2025-09-29 06:06:03.180704 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [0] - netdata 2025-09-29 06:06:03.180720 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [0] - openstackclient 2025-09-29 06:06:03.180732 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [0] - phpmyadmin 2025-09-29 06:06:03.180743 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [0] - common 2025-09-29 06:06:03.184373 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [1] -- loadbalancer 2025-09-29 06:06:03.184653 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [2] --- opensearch 2025-09-29 06:06:03.184676 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [2] --- mariadb-ng 2025-09-29 06:06:03.185120 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [3] ---- horizon 2025-09-29 06:06:03.185432 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [3] ---- keystone 2025-09-29 06:06:03.185452 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [4] ----- neutron 2025-09-29 06:06:03.185608 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [5] ------ wait-for-nova 2025-09-29 06:06:03.185628 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [5] ------ octavia 2025-09-29 06:06:03.187498 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [4] ----- barbican 2025-09-29 06:06:03.187518 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [4] ----- designate 2025-09-29 06:06:03.187530 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [4] ----- ironic 2025-09-29 06:06:03.187679 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [4] ----- placement 2025-09-29 06:06:03.187830 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [4] ----- magnum 2025-09-29 06:06:03.188904 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [1] -- openvswitch 2025-09-29 06:06:03.188931 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [2] --- ovn 2025-09-29 06:06:03.189536 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [1] -- 
memcached 2025-09-29 06:06:03.189557 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [1] -- redis 2025-09-29 06:06:03.189568 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [1] -- rabbitmq-ng 2025-09-29 06:06:03.189940 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [0] - kubernetes 2025-09-29 06:06:03.192791 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [1] -- kubeconfig 2025-09-29 06:06:03.192815 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [1] -- copy-kubeconfig 2025-09-29 06:06:03.192826 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [0] - ceph 2025-09-29 06:06:03.195315 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [1] -- ceph-pools 2025-09-29 06:06:03.195335 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [2] --- copy-ceph-keys 2025-09-29 06:06:03.195793 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [3] ---- cephclient 2025-09-29 06:06:03.195814 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-29 06:06:03.195825 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [4] ----- wait-for-keystone 2025-09-29 06:06:03.195998 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-29 06:06:03.196085 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [5] ------ glance 2025-09-29 06:06:03.196102 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [5] ------ cinder 2025-09-29 06:06:03.196114 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [5] ------ nova 2025-09-29 06:06:03.196556 | orchestrator | 2025-09-29 06:06:03 | INFO  | A [4] ----- prometheus 2025-09-29 06:06:03.196578 | orchestrator | 2025-09-29 06:06:03 | INFO  | D [5] ------ grafana 2025-09-29 06:06:03.403918 | orchestrator | 2025-09-29 06:06:03 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-29 06:06:03.404024 | orchestrator | 2025-09-29 06:06:03 | INFO  | Tasks are running in the background 2025-09-29 06:06:06.305931 | orchestrator | 2025-09-29 06:06:06 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-09-29 06:06:08.470371 | orchestrator | 2025-09-29 06:06:08 | INFO  | Task f24cd763-8c8f-423d-8219-cd8feb32cabf is in state STARTED 2025-09-29 06:06:08.472565 | orchestrator | 2025-09-29 06:06:08 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:06:08.480177 | orchestrator | 2025-09-29 06:06:08 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED 2025-09-29 06:06:08.480926 | orchestrator | 2025-09-29 06:06:08 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED 2025-09-29 06:06:08.481717 | orchestrator | 2025-09-29 06:06:08 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:06:08.482937 | orchestrator | 2025-09-29 06:06:08 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED 2025-09-29 06:06:08.485216 | orchestrator | 2025-09-29 06:06:08 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:06:08.485240 | orchestrator | 2025-09-29 06:06:08 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:06:27.135356 | orchestrator | 2025-09-29 06:06:27.135460 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-29 06:06:27.135476 | orchestrator | 2025-09-29 06:06:27.135489 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-09-29 06:06:27.135500 | orchestrator | Monday 29 September 2025 06:06:16 +0000 (0:00:00.830) 0:00:00.830 ****** 2025-09-29 06:06:27.135511 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:06:27.135523 | orchestrator | changed: [testbed-manager] 2025-09-29 06:06:27.135555 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:06:27.135567 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:06:27.135578 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:06:27.135589 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:06:27.135600 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:06:27.135610 | orchestrator | 2025-09-29 06:06:27.135621 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] 
******** 2025-09-29 06:06:27.135632 | orchestrator | Monday 29 September 2025 06:06:19 +0000 (0:00:03.231) 0:00:04.061 ****** 2025-09-29 06:06:27.135643 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-29 06:06:27.135655 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-29 06:06:27.135665 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-29 06:06:27.135676 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-29 06:06:27.135686 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-29 06:06:27.135697 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-29 06:06:27.135707 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-29 06:06:27.135718 | orchestrator | 2025-09-29 06:06:27.135728 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-09-29 06:06:27.135740 | orchestrator | Monday 29 September 2025 06:06:21 +0000 (0:00:01.527) 0:00:05.589 ****** 2025-09-29 06:06:27.135763 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-29 06:06:20.767482', 'end': '2025-09-29 06:06:20.776135', 'delta': '0:00:00.008653', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-29 06:06:27.135808 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-29 06:06:20.584295', 'end': '2025-09-29 06:06:20.593289', 'delta': '0:00:00.008994', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-29 06:06:27.135821 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-29 06:06:20.783798', 'end': '2025-09-29 06:06:20.789917', 'delta': '0:00:00.006119', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-29 06:06:27.135852 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-29 06:06:20.929749', 'end': '2025-09-29 06:06:20.938648', 'delta': '0:00:00.008899', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-29 06:06:27.135874 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-29 06:06:21.256481', 'end': '2025-09-29 06:06:21.266394', 'delta': '0:00:00.009913', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-29 06:06:27.135889 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-29 06:06:21.097956', 'end': '2025-09-29 06:06:21.105607', 'delta': '0:00:00.007651', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-29 06:06:27.136189 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-29 06:06:21.052826', 'end': '2025-09-29 06:06:21.061328', 'delta': '0:00:00.008502', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-29 06:06:27.136205 | orchestrator | 2025-09-29 06:06:27.136220 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-09-29 06:06:27.136234 | orchestrator | Monday 29 September 2025 06:06:23 +0000 (0:00:01.640) 0:00:07.230 ****** 2025-09-29 06:06:27.136247 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-29 06:06:27.136258 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-29 06:06:27.136269 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-29 06:06:27.136279 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-29 06:06:27.136290 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-29 06:06:27.136301 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-29 06:06:27.136311 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-29 06:06:27.136334 | orchestrator | 2025-09-29 06:06:27.136345 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-09-29 06:06:27.136356 | orchestrator | Monday 29 September 2025 06:06:24 +0000 (0:00:01.232) 0:00:08.463 ****** 2025-09-29 06:06:27.136366 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-29 06:06:27.136377 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-29 06:06:27.136388 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-29 06:06:27.136399 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-29 06:06:27.136409 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-29 06:06:27.136420 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-29 06:06:27.136430 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-29 06:06:27.136441 | orchestrator | 2025-09-29 06:06:27.136452 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:06:27.136471 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:06:27.136484 | orchestrator | 
testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:06:27.136500 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:06:27.136511 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:06:27.136522 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:06:27.136532 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:06:27.136543 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:06:27.136554 | orchestrator |
2025-09-29 06:06:27.136565 | orchestrator |
2025-09-29 06:06:27.136575 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:06:27.136586 | orchestrator | Monday 29 September 2025 06:06:26 +0000 (0:00:01.896) 0:00:10.359 ******
2025-09-29 06:06:27.136597 | orchestrator | ===============================================================================
2025-09-29 06:06:27.136608 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.23s
2025-09-29 06:06:27.136619 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 1.90s
2025-09-29 06:06:27.136629 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.64s
2025-09-29 06:06:27.136640 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.53s
2025-09-29 06:06:27.136650 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.23s
2025-09-29 06:06:27.136661 | orchestrator | 2025-09-29 06:06:26 | INFO  | Task f24cd763-8c8f-423d-8219-cd8feb32cabf is in state SUCCESS
2025-09-29 06:06:27.136672 | orchestrator | 2025-09-29 06:06:26 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:27.136682 | orchestrator | 2025-09-29 06:06:26 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:27.136693 | orchestrator | 2025-09-29 06:06:26 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:27.136704 | orchestrator | 2025-09-29 06:06:26 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:27.136715 | orchestrator | 2025-09-29 06:06:26 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:27.136732 | orchestrator | 2025-09-29 06:06:26 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:27.136743 | orchestrator | 2025-09-29 06:06:26 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:29.978150 | orchestrator | 2025-09-29 06:06:29 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:29.978273 | orchestrator | 2025-09-29 06:06:29 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:29.978290 | orchestrator | 2025-09-29 06:06:29 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:29.978302 | orchestrator | 2025-09-29 06:06:29 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:29.978313 | orchestrator | 2025-09-29 06:06:29 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:29.978324 | orchestrator | 2025-09-29 06:06:29 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:29.978335 | orchestrator | 2025-09-29 06:06:29 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:29.978347 | orchestrator | 2025-09-29 06:06:29 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:33.002967 | orchestrator | 2025-09-29 06:06:32 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:33.003045 | orchestrator | 2025-09-29 06:06:32 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:33.003051 | orchestrator | 2025-09-29 06:06:32 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:33.003056 | orchestrator | 2025-09-29 06:06:32 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:33.003060 | orchestrator | 2025-09-29 06:06:32 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:33.003064 | orchestrator | 2025-09-29 06:06:32 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:33.003068 | orchestrator | 2025-09-29 06:06:33 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:33.003072 | orchestrator | 2025-09-29 06:06:33 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:36.088690 | orchestrator | 2025-09-29 06:06:36 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:36.088807 | orchestrator | 2025-09-29 06:06:36 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:36.090901 | orchestrator | 2025-09-29 06:06:36 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:36.091224 | orchestrator | 2025-09-29 06:06:36 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:36.091657 | orchestrator | 2025-09-29 06:06:36 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:36.093427 | orchestrator | 2025-09-29 06:06:36 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:36.094200 | orchestrator | 2025-09-29 06:06:36 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:36.094234 | orchestrator | 2025-09-29 06:06:36 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:39.135236 | orchestrator | 2025-09-29 06:06:39 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:39.137594 | orchestrator | 2025-09-29 06:06:39 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:39.139824 | orchestrator | 2025-09-29 06:06:39 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:39.140233 | orchestrator | 2025-09-29 06:06:39 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:39.140821 | orchestrator | 2025-09-29 06:06:39 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:39.142151 | orchestrator | 2025-09-29 06:06:39 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:39.142588 | orchestrator | 2025-09-29 06:06:39 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:39.142620 | orchestrator | 2025-09-29 06:06:39 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:42.209429 | orchestrator | 2025-09-29 06:06:42 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:42.209990 | orchestrator | 2025-09-29 06:06:42 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:42.210467 | orchestrator | 2025-09-29 06:06:42 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:42.211136 | orchestrator | 2025-09-29 06:06:42 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:42.213583 | orchestrator | 2025-09-29 06:06:42 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:42.216449 | orchestrator | 2025-09-29 06:06:42 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:42.218203 | orchestrator | 2025-09-29 06:06:42 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:42.218403 | orchestrator | 2025-09-29 06:06:42 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:45.395041 | orchestrator | 2025-09-29 06:06:45 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:45.396705 | orchestrator | 2025-09-29 06:06:45 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:45.397977 | orchestrator | 2025-09-29 06:06:45 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:45.400386 | orchestrator | 2025-09-29 06:06:45 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:45.401390 | orchestrator | 2025-09-29 06:06:45 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:45.402830 | orchestrator | 2025-09-29 06:06:45 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:45.403727 | orchestrator | 2025-09-29 06:06:45 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:45.403818 | orchestrator | 2025-09-29 06:06:45 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:48.608052 | orchestrator | 2025-09-29 06:06:48 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:48.608154 | orchestrator | 2025-09-29 06:06:48 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:48.608171 | orchestrator | 2025-09-29 06:06:48 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:48.608202 | orchestrator | 2025-09-29 06:06:48 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:48.608214 | orchestrator | 2025-09-29 06:06:48 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:48.608225 | orchestrator | 2025-09-29 06:06:48 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:48.608261 | orchestrator | 2025-09-29 06:06:48 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:48.608274 | orchestrator | 2025-09-29 06:06:48 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:51.651922 | orchestrator | 2025-09-29 06:06:51 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:51.652461 | orchestrator | 2025-09-29 06:06:51 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:51.652574 | orchestrator | 2025-09-29 06:06:51 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:51.653317 | orchestrator | 2025-09-29 06:06:51 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:51.653761 | orchestrator | 2025-09-29 06:06:51 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state STARTED
2025-09-29 06:06:51.654940 | orchestrator | 2025-09-29 06:06:51 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:51.654996 | orchestrator | 2025-09-29 06:06:51 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:51.655003 | orchestrator | 2025-09-29 06:06:51 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:54.705132 | orchestrator | 2025-09-29 06:06:54 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:54.706327 | orchestrator | 2025-09-29 06:06:54 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:54.706865 | orchestrator | 2025-09-29 06:06:54 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:54.708387 | orchestrator | 2025-09-29 06:06:54 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:54.710214 | orchestrator | 2025-09-29 06:06:54 | INFO  | Task 14ca01ba-7974-4705-981e-5b68b3f84f23 is in state SUCCESS
2025-09-29 06:06:54.716471 | orchestrator | 2025-09-29 06:06:54 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:54.716564 | orchestrator | 2025-09-29 06:06:54 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:54.716585 | orchestrator | 2025-09-29 06:06:54 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:06:57.788417 | orchestrator | 2025-09-29 06:06:57 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:06:57.796249 | orchestrator | 2025-09-29 06:06:57 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:06:57.796341 | orchestrator | 2025-09-29 06:06:57 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:06:57.800235 | orchestrator | 2025-09-29 06:06:57 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:06:57.800316 | orchestrator | 2025-09-29 06:06:57 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:06:57.803233 | orchestrator | 2025-09-29 06:06:57 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:06:57.803288 | orchestrator | 2025-09-29 06:06:57 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:00.847633 | orchestrator | 2025-09-29 06:07:00 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:00.849171 | orchestrator | 2025-09-29 06:07:00 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state STARTED
2025-09-29 06:07:00.849199 | orchestrator | 2025-09-29 06:07:00 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:00.850335 | orchestrator | 2025-09-29 06:07:00 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:00.851330 | orchestrator | 2025-09-29 06:07:00 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:00.852156 | orchestrator | 2025-09-29 06:07:00 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:00.852182 | orchestrator | 2025-09-29 06:07:00 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:03.938650 | orchestrator | 2025-09-29 06:07:03 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:03.938738 | orchestrator | 2025-09-29 06:07:03 | INFO  | Task 5e0b626d-39ec-43cf-99bc-f4d34917a136 is in state SUCCESS
2025-09-29 06:07:03.943172 | orchestrator | 2025-09-29 06:07:03 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:03.947339 | orchestrator | 2025-09-29 06:07:03 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:03.950247 | orchestrator | 2025-09-29 06:07:03 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:03.956334 | orchestrator | 2025-09-29 06:07:03 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:03.956427 | orchestrator | 2025-09-29 06:07:03 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:07.023336 | orchestrator | 2025-09-29 06:07:07 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:07.023419 | orchestrator | 2025-09-29 06:07:07 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:07.023428 | orchestrator | 2025-09-29 06:07:07 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:07.023435 | orchestrator | 2025-09-29 06:07:07 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:07.023442 | orchestrator | 2025-09-29 06:07:07 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:07.023449 | orchestrator | 2025-09-29 06:07:07 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:10.140703 | orchestrator | 2025-09-29 06:07:10 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:10.140952 | orchestrator | 2025-09-29 06:07:10 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:10.142110 | orchestrator | 2025-09-29 06:07:10 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:10.147526 | orchestrator | 2025-09-29 06:07:10 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:10.150534 | orchestrator | 2025-09-29 06:07:10 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:10.150567 | orchestrator | 2025-09-29 06:07:10 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:13.196020 | orchestrator | 2025-09-29 06:07:13 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:13.196193 | orchestrator | 2025-09-29 06:07:13 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:13.197258 | orchestrator | 2025-09-29 06:07:13 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:13.197969 | orchestrator | 2025-09-29 06:07:13 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:13.198821 | orchestrator | 2025-09-29 06:07:13 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:13.198861 | orchestrator | 2025-09-29 06:07:13 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:16.518093 | orchestrator | 2025-09-29 06:07:16 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:16.518623 | orchestrator | 2025-09-29 06:07:16 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:16.519747 | orchestrator | 2025-09-29 06:07:16 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:16.520492 | orchestrator | 2025-09-29 06:07:16 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:16.521838 | orchestrator | 2025-09-29 06:07:16 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:16.521984 | orchestrator | 2025-09-29 06:07:16 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:19.578603 | orchestrator | 2025-09-29 06:07:19 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:19.582613 | orchestrator | 2025-09-29 06:07:19 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:19.584192 | orchestrator | 2025-09-29 06:07:19 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:19.585643 | orchestrator | 2025-09-29 06:07:19 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:19.587157 | orchestrator | 2025-09-29 06:07:19 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:19.587183 | orchestrator | 2025-09-29 06:07:19 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:22.634326 | orchestrator | 2025-09-29 06:07:22 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:22.634437 | orchestrator | 2025-09-29 06:07:22 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state STARTED
2025-09-29 06:07:22.635609 | orchestrator | 2025-09-29 06:07:22 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:22.635682 | orchestrator | 2025-09-29 06:07:22 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED
2025-09-29 06:07:22.636292 | orchestrator | 2025-09-29 06:07:22 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED
2025-09-29 06:07:22.636828 | orchestrator | 2025-09-29 06:07:22 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:07:25.673020 | orchestrator | 2025-09-29 06:07:25 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:07:25.673942 | orchestrator | 2025-09-29 06:07:25 | INFO  | Task 1f2a6632-3a8c-4192-9744-ab0109e21151 is in state SUCCESS
2025-09-29 06:07:25.674224 | orchestrator | 2025-09-29 06:07:25 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:07:25.675304 | orchestrator |
2025-09-29 06:07:25.675335 | orchestrator |
2025-09-29 06:07:25.675344 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-29 06:07:25.675352 | orchestrator |
2025-09-29 06:07:25.675360 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-29 06:07:25.675367 | orchestrator | Monday 29 September 2025 06:06:15 +0000 (0:00:00.246) 0:00:00.246 ******
2025-09-29 06:07:25.675375 | orchestrator | ok: [testbed-manager] => {
2025-09-29 06:07:25.675383 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-29 06:07:25.675391 | orchestrator | }
2025-09-29 06:07:25.675399 | orchestrator |
2025-09-29 06:07:25.675406 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-29 06:07:25.675412 | orchestrator | Monday 29 September 2025 06:06:15 +0000 (0:00:00.240) 0:00:00.486 ******
2025-09-29 06:07:25.675436 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.675444 | orchestrator |
2025-09-29 06:07:25.675451 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-29 06:07:25.675458 | orchestrator | Monday 29 September 2025 06:06:16 +0000 (0:00:01.029) 0:00:01.516 ******
2025-09-29 06:07:25.675465 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-29 06:07:25.675471 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-29 06:07:25.675478 | orchestrator |
2025-09-29 06:07:25.675485 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-29 06:07:25.675492 | orchestrator | Monday 29 September 2025 06:06:18 +0000 (0:00:01.771) 0:00:03.287 ******
2025-09-29 06:07:25.675498 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.675505 | orchestrator |
2025-09-29 06:07:25.675512 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-29 06:07:25.675518 | orchestrator | Monday 29 September 2025 06:06:20 +0000 (0:00:02.239) 0:00:05.527 ******
2025-09-29 06:07:25.675525 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.675531 | orchestrator |
2025-09-29 06:07:25.675538 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-29 06:07:25.675559 | orchestrator | Monday 29 September 2025 06:06:23 +0000 (0:00:02.871) 0:00:08.398 ******
2025-09-29 06:07:25.675566 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-29 06:07:25.675573 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.675579 | orchestrator |
2025-09-29 06:07:25.675586 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-29 06:07:25.675593 | orchestrator | Monday 29 September 2025 06:06:50 +0000 (0:00:27.361) 0:00:35.759 ******
2025-09-29 06:07:25.675599 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.675606 | orchestrator |
2025-09-29 06:07:25.675612 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:07:25.675619 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:07:25.675627 | orchestrator |
2025-09-29 06:07:25.675634 | orchestrator |
2025-09-29 06:07:25.675640 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:07:25.675647 | orchestrator | Monday 29 September 2025 06:06:53 +0000 (0:00:03.130) 0:00:38.890 ******
2025-09-29 06:07:25.675654 | orchestrator | ===============================================================================
2025-09-29 06:07:25.675661 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.36s
2025-09-29 06:07:25.675668 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.13s
2025-09-29 06:07:25.675674 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.87s
2025-09-29 06:07:25.675681 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.24s
2025-09-29 06:07:25.675688 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.77s
2025-09-29 06:07:25.675695 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.03s
2025-09-29 06:07:25.675701 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.24s
2025-09-29 06:07:25.675708 | orchestrator |
2025-09-29 06:07:25.675715 | orchestrator |
2025-09-29 06:07:25.675721 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-29 06:07:25.675728 | orchestrator |
2025-09-29 06:07:25.675735 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-29 06:07:25.675741 | orchestrator | Monday 29 September 2025 06:06:17 +0000 (0:00:00.871) 0:00:00.871 ******
2025-09-29 06:07:25.675748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-29 06:07:25.675778 | orchestrator |
2025-09-29 06:07:25.675791 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-29 06:07:25.675830 | orchestrator | Monday 29 September 2025 06:06:17 +0000 (0:00:00.705) 0:00:01.576 ******
2025-09-29 06:07:25.675837 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-29 06:07:25.675844 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-29 06:07:25.675851 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-29 06:07:25.675858 | orchestrator |
2025-09-29 06:07:25.675864 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-29 06:07:25.675871 | orchestrator | Monday 29 September 2025 06:06:19 +0000 (0:00:01.473) 0:00:03.050 ******
2025-09-29 06:07:25.675877 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.675884 | orchestrator |
2025-09-29 06:07:25.675891 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-29 06:07:25.675897 | orchestrator | Monday 29 September 2025 06:06:21 +0000 (0:00:02.419) 0:00:05.470 ******
2025-09-29 06:07:25.675913 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-29 06:07:25.675920 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.675927 | orchestrator |
2025-09-29 06:07:25.675934 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-29 06:07:25.675940 | orchestrator | Monday 29 September 2025 06:06:54 +0000 (0:00:32.659) 0:00:38.129 ******
2025-09-29 06:07:25.675948 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.675954 | orchestrator |
2025-09-29 06:07:25.675961 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-29 06:07:25.675967 | orchestrator | Monday 29 September 2025 06:06:55 +0000 (0:00:01.550) 0:00:39.680 ******
2025-09-29 06:07:25.675974 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.675981 | orchestrator |
2025-09-29 06:07:25.675987 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-29 06:07:25.675994 | orchestrator | Monday 29 September 2025 06:06:57 +0000 (0:00:01.406) 0:00:41.086 ******
2025-09-29 06:07:25.676001 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.676007 | orchestrator |
2025-09-29 06:07:25.676014 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-29 06:07:25.676020 | orchestrator | Monday 29 September 2025 06:07:00 +0000 (0:00:02.798) 0:00:43.885 ******
2025-09-29 06:07:25.676027 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.676033 | orchestrator |
2025-09-29 06:07:25.676040 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-29 06:07:25.676047 | orchestrator | Monday 29 September 2025 06:07:01 +0000 (0:00:00.856) 0:00:45.141 ******
2025-09-29 06:07:25.676053 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.676060 | orchestrator |
2025-09-29 06:07:25.676066 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-29 06:07:25.676073 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:00.856) 0:00:45.997 ******
2025-09-29 06:07:25.676079 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.676086 | orchestrator |
2025-09-29 06:07:25.676092 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:07:25.676099 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:07:25.676106 | orchestrator |
2025-09-29 06:07:25.676113 | orchestrator |
2025-09-29 06:07:25.676119 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:07:25.676126 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:00.577) 0:00:46.575 ******
2025-09-29 06:07:25.676132 | orchestrator | ===============================================================================
2025-09-29 06:07:25.676139 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.66s
2025-09-29 06:07:25.676145 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.80s
2025-09-29 06:07:25.676156 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.42s
2025-09-29 06:07:25.676163 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.55s
2025-09-29 06:07:25.676169 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.47s
2025-09-29 06:07:25.676176 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.41s
2025-09-29 06:07:25.676182 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.26s
2025-09-29 06:07:25.676189 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.86s
2025-09-29 06:07:25.676196 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.71s
2025-09-29 06:07:25.676202 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.58s
2025-09-29 06:07:25.676209 | orchestrator |
2025-09-29 06:07:25.676215 | orchestrator |
2025-09-29 06:07:25.676222 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:07:25.676228 | orchestrator |
2025-09-29 06:07:25.676235 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:07:25.676242 | orchestrator | Monday 29 September 2025 06:06:16 +0000 (0:00:00.758) 0:00:00.758 ******
2025-09-29 06:07:25.676250 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-29 06:07:25.676257 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-29 06:07:25.676264 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-29 06:07:25.676270 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-29 06:07:25.676277 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-29 06:07:25.676283 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-29 06:07:25.676290 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-29 06:07:25.676296 | orchestrator |
2025-09-29 06:07:25.676303 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-29 06:07:25.676309 | orchestrator |
2025-09-29 06:07:25.676316 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-29 06:07:25.676323 | orchestrator | Monday 29 September 2025 06:06:18 +0000 (0:00:02.264) 0:00:03.022 ******
2025-09-29 06:07:25.676339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:07:25.676352 | orchestrator |
2025-09-29 06:07:25.676358 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-29 06:07:25.676365 | orchestrator | Monday 29 September 2025 06:06:20 +0000 (0:00:02.125) 0:00:05.148 ******
2025-09-29 06:07:25.676372 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:07:25.676378 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:07:25.676385 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:07:25.676391 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:07:25.676398 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:07:25.676408 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:07:25.676415 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.676421 | orchestrator |
2025-09-29 06:07:25.676428 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-29 06:07:25.676435 | orchestrator | Monday 29 September 2025 06:06:22 +0000 (0:00:01.963) 0:00:07.111 ******
2025-09-29 06:07:25.676441 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:07:25.676448 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:07:25.676454 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:07:25.676461 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:07:25.676467 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:07:25.676474 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:07:25.676480 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.676487 | orchestrator |
2025-09-29 06:07:25.676493 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-29 06:07:25.676504 | orchestrator | Monday 29 September 2025 06:06:26 +0000 (0:00:04.014) 0:00:11.126 ******
2025-09-29 06:07:25.676511 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:07:25.676517 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:07:25.676524 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.676530 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:07:25.676537 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:07:25.676544 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:07:25.676550 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:07:25.676557 | orchestrator |
2025-09-29 06:07:25.676563 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-29 06:07:25.676570 | orchestrator | Monday 29 September 2025 06:06:29 +0000 (0:00:02.667) 0:00:13.793 ******
2025-09-29 06:07:25.676577 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:07:25.676583 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:07:25.676589 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:07:25.676596 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:07:25.676603 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.676609 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:07:25.676615 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:07:25.676622 | orchestrator |
2025-09-29 06:07:25.676629 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-29 06:07:25.676635 | orchestrator | Monday 29 September 2025 06:06:39 +0000 (0:00:10.457) 0:00:24.251 ******
2025-09-29 06:07:25.676642 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:07:25.676648 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:07:25.676655 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:07:25.676661 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:07:25.676668 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:07:25.676674 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:07:25.676681 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.676687 | orchestrator |
2025-09-29 06:07:25.676694 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-29 06:07:25.676700 | orchestrator | Monday 29 September 2025 06:07:05 +0000 (0:00:25.640) 0:00:49.891 ******
2025-09-29 06:07:25.676708 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:07:25.676716 | orchestrator |
2025-09-29 06:07:25.676722 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-29 06:07:25.676729 | orchestrator | Monday 29 September 2025 06:07:07 +0000 (0:00:01.985) 0:00:51.877 ******
2025-09-29 06:07:25.676735 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-29 06:07:25.676742 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-29 06:07:25.676749 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-29 06:07:25.676779 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-29 06:07:25.676786 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-29 06:07:25.676793 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-29 06:07:25.676799 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-29 06:07:25.676806 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-29 06:07:25.676813 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-29 06:07:25.676822 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-29 06:07:25.676829 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-29 06:07:25.676836 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-29 06:07:25.676842 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-29 06:07:25.676849 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-29 06:07:25.676855 | orchestrator |
2025-09-29 06:07:25.676862 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-29 06:07:25.676873 | orchestrator | Monday 29 September 2025 06:07:12 +0000 (0:00:05.328) 0:00:57.205 ******
2025-09-29 06:07:25.676879 | orchestrator | ok: [testbed-manager]
2025-09-29 06:07:25.676886 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:07:25.676892 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:07:25.676899 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:07:25.676906 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:07:25.676912 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:07:25.676919 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:07:25.676926 | orchestrator |
2025-09-29 06:07:25.676932 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-29 06:07:25.676939 | orchestrator | Monday 29 September 2025 06:07:13 +0000 (0:00:01.141) 0:00:58.347 ******
2025-09-29 06:07:25.676945 | orchestrator | changed: [testbed-manager]
2025-09-29 06:07:25.676952 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:07:25.676959 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:07:25.676965 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:07:25.676972 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:07:25.676978 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:07:25.676985 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:07:25.676991 | orchestrator |
2025-09-29 06:07:25.676998 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group]
*************** 2025-09-29 06:07:25.677008 | orchestrator | Monday 29 September 2025 06:07:15 +0000 (0:00:01.372) 0:00:59.719 ****** 2025-09-29 06:07:25.677015 | orchestrator | ok: [testbed-manager] 2025-09-29 06:07:25.677022 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:07:25.677029 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:07:25.677035 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:07:25.677042 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:07:25.677048 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:07:25.677055 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:07:25.677061 | orchestrator | 2025-09-29 06:07:25.677068 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-29 06:07:25.677075 | orchestrator | Monday 29 September 2025 06:07:16 +0000 (0:00:01.384) 0:01:01.104 ****** 2025-09-29 06:07:25.677081 | orchestrator | ok: [testbed-manager] 2025-09-29 06:07:25.677088 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:07:25.677094 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:07:25.677101 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:07:25.677107 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:07:25.677114 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:07:25.677120 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:07:25.677127 | orchestrator | 2025-09-29 06:07:25.677134 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-29 06:07:25.677141 | orchestrator | Monday 29 September 2025 06:07:18 +0000 (0:00:01.875) 0:01:02.979 ****** 2025-09-29 06:07:25.677147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-29 06:07:25.677155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:07:25.677162 | orchestrator | 2025-09-29 06:07:25.677169 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-29 06:07:25.677175 | orchestrator | Monday 29 September 2025 06:07:19 +0000 (0:00:01.306) 0:01:04.286 ****** 2025-09-29 06:07:25.677182 | orchestrator | changed: [testbed-manager] 2025-09-29 06:07:25.677188 | orchestrator | 2025-09-29 06:07:25.677195 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-29 06:07:25.677202 | orchestrator | Monday 29 September 2025 06:07:21 +0000 (0:00:02.098) 0:01:06.385 ****** 2025-09-29 06:07:25.677208 | orchestrator | changed: [testbed-manager] 2025-09-29 06:07:25.677215 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:07:25.677221 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:07:25.677231 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:07:25.677238 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:07:25.677244 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:07:25.677251 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:07:25.677258 | orchestrator | 2025-09-29 06:07:25.677264 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:07:25.677271 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:07:25.677278 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:07:25.677284 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:07:25.677291 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:07:25.677298 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-09-29 06:07:25.677304 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:07:25.677317 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:07:25.677324 | orchestrator | 2025-09-29 06:07:25.677330 | orchestrator | 2025-09-29 06:07:25.677337 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:07:25.677344 | orchestrator | Monday 29 September 2025 06:07:24 +0000 (0:00:02.922) 0:01:09.307 ****** 2025-09-29 06:07:25.677350 | orchestrator | =============================================================================== 2025-09-29 06:07:25.677357 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.64s 2025-09-29 06:07:25.677364 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.46s 2025-09-29 06:07:25.677370 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.33s 2025-09-29 06:07:25.677377 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.01s 2025-09-29 06:07:25.677383 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.92s 2025-09-29 06:07:25.677390 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.67s 2025-09-29 06:07:25.677396 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.26s 2025-09-29 06:07:25.677403 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.13s 2025-09-29 06:07:25.677409 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.10s 2025-09-29 06:07:25.677416 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.99s 2025-09-29 
06:07:25.677423 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.96s 2025-09-29 06:07:25.677432 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.88s 2025-09-29 06:07:25.677439 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.38s 2025-09-29 06:07:25.677446 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.37s 2025-09-29 06:07:25.677452 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.31s 2025-09-29 06:07:25.677459 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.14s 2025-09-29 06:07:25.677465 | orchestrator | 2025-09-29 06:07:25 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state STARTED 2025-09-29 06:07:25.677472 | orchestrator | 2025-09-29 06:07:25 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:25.677484 | orchestrator | 2025-09-29 06:07:25 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:28.730156 | orchestrator | 2025-09-29 06:07:28 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:28.731708 | orchestrator | 2025-09-29 06:07:28 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:28.732738 | orchestrator | 2025-09-29 06:07:28 | INFO  | Task 14a30e18-56c8-4b64-9de8-27d7766815d8 is in state SUCCESS 2025-09-29 06:07:28.734907 | orchestrator | 2025-09-29 06:07:28 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:28.735236 | orchestrator | 2025-09-29 06:07:28 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:31.776549 | orchestrator | 2025-09-29 06:07:31 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:31.778474 | orchestrator | 2025-09-29 06:07:31 | INFO  
| Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:31.781047 | orchestrator | 2025-09-29 06:07:31 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:31.781109 | orchestrator | 2025-09-29 06:07:31 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:34.819248 | orchestrator | 2025-09-29 06:07:34 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:34.821517 | orchestrator | 2025-09-29 06:07:34 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:34.823327 | orchestrator | 2025-09-29 06:07:34 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:34.823918 | orchestrator | 2025-09-29 06:07:34 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:37.864237 | orchestrator | 2025-09-29 06:07:37 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:37.865961 | orchestrator | 2025-09-29 06:07:37 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:37.867349 | orchestrator | 2025-09-29 06:07:37 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:37.867379 | orchestrator | 2025-09-29 06:07:37 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:40.913090 | orchestrator | 2025-09-29 06:07:40 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:40.918264 | orchestrator | 2025-09-29 06:07:40 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:40.923543 | orchestrator | 2025-09-29 06:07:40 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:40.925818 | orchestrator | 2025-09-29 06:07:40 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:44.021384 | orchestrator | 2025-09-29 06:07:44 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state 
STARTED 2025-09-29 06:07:44.022827 | orchestrator | 2025-09-29 06:07:44 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:44.025474 | orchestrator | 2025-09-29 06:07:44 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:44.026342 | orchestrator | 2025-09-29 06:07:44 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:47.101438 | orchestrator | 2025-09-29 06:07:47 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:47.101523 | orchestrator | 2025-09-29 06:07:47 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:47.103366 | orchestrator | 2025-09-29 06:07:47 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:47.103427 | orchestrator | 2025-09-29 06:07:47 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:50.146545 | orchestrator | 2025-09-29 06:07:50 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:50.147001 | orchestrator | 2025-09-29 06:07:50 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:50.148477 | orchestrator | 2025-09-29 06:07:50 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:50.148519 | orchestrator | 2025-09-29 06:07:50 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:53.185140 | orchestrator | 2025-09-29 06:07:53 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:53.185532 | orchestrator | 2025-09-29 06:07:53 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:53.186339 | orchestrator | 2025-09-29 06:07:53 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:53.186399 | orchestrator | 2025-09-29 06:07:53 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:56.226378 | orchestrator | 
2025-09-29 06:07:56 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:56.227268 | orchestrator | 2025-09-29 06:07:56 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:56.229976 | orchestrator | 2025-09-29 06:07:56 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:56.230013 | orchestrator | 2025-09-29 06:07:56 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:07:59.270458 | orchestrator | 2025-09-29 06:07:59 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:07:59.271102 | orchestrator | 2025-09-29 06:07:59 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:07:59.272317 | orchestrator | 2025-09-29 06:07:59 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:07:59.272327 | orchestrator | 2025-09-29 06:07:59 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:02.324874 | orchestrator | 2025-09-29 06:08:02 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:02.326948 | orchestrator | 2025-09-29 06:08:02 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:02.329528 | orchestrator | 2025-09-29 06:08:02 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:08:02.329562 | orchestrator | 2025-09-29 06:08:02 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:05.377566 | orchestrator | 2025-09-29 06:08:05 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:05.383599 | orchestrator | 2025-09-29 06:08:05 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:05.385549 | orchestrator | 2025-09-29 06:08:05 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:08:05.385785 | orchestrator | 2025-09-29 06:08:05 | INFO  | 
Wait 1 second(s) until the next check 2025-09-29 06:08:08.428444 | orchestrator | 2025-09-29 06:08:08 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:08.432436 | orchestrator | 2025-09-29 06:08:08 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:08.432513 | orchestrator | 2025-09-29 06:08:08 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:08:08.432549 | orchestrator | 2025-09-29 06:08:08 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:11.485169 | orchestrator | 2025-09-29 06:08:11 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:11.487044 | orchestrator | 2025-09-29 06:08:11 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:11.488351 | orchestrator | 2025-09-29 06:08:11 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:08:11.488849 | orchestrator | 2025-09-29 06:08:11 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:14.520576 | orchestrator | 2025-09-29 06:08:14 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:14.521006 | orchestrator | 2025-09-29 06:08:14 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:14.521640 | orchestrator | 2025-09-29 06:08:14 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:08:14.521673 | orchestrator | 2025-09-29 06:08:14 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:17.562320 | orchestrator | 2025-09-29 06:08:17 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:17.564301 | orchestrator | 2025-09-29 06:08:17 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:17.569110 | orchestrator | 2025-09-29 06:08:17 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state 
STARTED 2025-09-29 06:08:17.569501 | orchestrator | 2025-09-29 06:08:17 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:20.610309 | orchestrator | 2025-09-29 06:08:20 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:20.612311 | orchestrator | 2025-09-29 06:08:20 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:20.613110 | orchestrator | 2025-09-29 06:08:20 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:08:20.613139 | orchestrator | 2025-09-29 06:08:20 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:23.659991 | orchestrator | 2025-09-29 06:08:23 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:23.663738 | orchestrator | 2025-09-29 06:08:23 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:23.665424 | orchestrator | 2025-09-29 06:08:23 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state STARTED 2025-09-29 06:08:23.665456 | orchestrator | 2025-09-29 06:08:23 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:26.693944 | orchestrator | 2025-09-29 06:08:26 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:08:26.694390 | orchestrator | 2025-09-29 06:08:26 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:08:26.694965 | orchestrator | 2025-09-29 06:08:26 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:26.698529 | orchestrator | 2025-09-29 06:08:26 | INFO  | Task 8aa168ba-6ac1-4699-a7d7-4f1bb3a36065 is in state STARTED 2025-09-29 06:08:26.698938 | orchestrator | 2025-09-29 06:08:26 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED 2025-09-29 06:08:26.700795 | orchestrator | 2025-09-29 06:08:26 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 
06:08:26.705149 | orchestrator | 2025-09-29 06:08:26 | INFO  | Task 01ee4474-a785-4e47-bca0-dda35d9f211c is in state SUCCESS 2025-09-29 06:08:26.705295 | orchestrator | 2025-09-29 06:08:26.705313 | orchestrator | 2025-09-29 06:08:26.705434 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-29 06:08:26.705450 | orchestrator | 2025-09-29 06:08:26.705461 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-29 06:08:26.705473 | orchestrator | Monday 29 September 2025 06:06:33 +0000 (0:00:00.213) 0:00:00.213 ****** 2025-09-29 06:08:26.705484 | orchestrator | ok: [testbed-manager] 2025-09-29 06:08:26.705496 | orchestrator | 2025-09-29 06:08:26.705507 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-29 06:08:26.705518 | orchestrator | Monday 29 September 2025 06:06:34 +0000 (0:00:00.894) 0:00:01.107 ****** 2025-09-29 06:08:26.705530 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-29 06:08:26.705541 | orchestrator | 2025-09-29 06:08:26.705551 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-09-29 06:08:26.705562 | orchestrator | Monday 29 September 2025 06:06:34 +0000 (0:00:00.533) 0:00:01.641 ****** 2025-09-29 06:08:26.705573 | orchestrator | changed: [testbed-manager] 2025-09-29 06:08:26.705583 | orchestrator | 2025-09-29 06:08:26.705595 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-29 06:08:26.705605 | orchestrator | Monday 29 September 2025 06:06:36 +0000 (0:00:01.283) 0:00:02.924 ****** 2025-09-29 06:08:26.705616 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-09-29 06:08:26.705627 | orchestrator | ok: [testbed-manager] 2025-09-29 06:08:26.705638 | orchestrator | 2025-09-29 06:08:26.705648 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-29 06:08:26.705659 | orchestrator | Monday 29 September 2025 06:07:24 +0000 (0:00:48.222) 0:00:51.147 ****** 2025-09-29 06:08:26.705711 | orchestrator | changed: [testbed-manager] 2025-09-29 06:08:26.705723 | orchestrator | 2025-09-29 06:08:26.705734 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:08:26.705817 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:08:26.705831 | orchestrator | 2025-09-29 06:08:26.705842 | orchestrator | 2025-09-29 06:08:26.705853 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:08:26.705864 | orchestrator | Monday 29 September 2025 06:07:28 +0000 (0:00:03.564) 0:00:54.711 ****** 2025-09-29 06:08:26.705875 | orchestrator | =============================================================================== 2025-09-29 06:08:26.705886 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 48.22s 2025-09-29 06:08:26.705897 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.56s 2025-09-29 06:08:26.705907 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.28s 2025-09-29 06:08:26.705918 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.89s 2025-09-29 06:08:26.705931 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.53s 2025-09-29 06:08:26.705944 | orchestrator | 2025-09-29 06:08:26.707475 | orchestrator | 2025-09-29 06:08:26.707516 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-09-29 06:08:26.707528 | orchestrator | 2025-09-29 06:08:26.707547 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-29 06:08:26.707559 | orchestrator | Monday 29 September 2025 06:06:08 +0000 (0:00:00.313) 0:00:00.313 ****** 2025-09-29 06:08:26.707571 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:08:26.707583 | orchestrator | 2025-09-29 06:08:26.707593 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-29 06:08:26.707604 | orchestrator | Monday 29 September 2025 06:06:09 +0000 (0:00:01.428) 0:00:01.741 ****** 2025-09-29 06:08:26.707615 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-29 06:08:26.707642 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-29 06:08:26.707654 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-29 06:08:26.707664 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-29 06:08:26.707675 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-29 06:08:26.707686 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-29 06:08:26.707696 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-29 06:08:26.707708 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-29 06:08:26.707718 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-29 06:08:26.707729 | orchestrator | changed: [testbed-node-2] => 
(item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-29 06:08:26.707812 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-29 06:08:26.707827 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-29 06:08:26.707838 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-29 06:08:26.707848 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-29 06:08:26.707859 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-29 06:08:26.707870 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-29 06:08:26.707881 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-29 06:08:26.707891 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-29 06:08:26.707902 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-29 06:08:26.707913 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-29 06:08:26.707923 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-29 06:08:26.707934 | orchestrator | 2025-09-29 06:08:26.707944 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-29 06:08:26.707955 | orchestrator | Monday 29 September 2025 06:06:14 +0000 (0:00:04.224) 0:00:05.966 ****** 2025-09-29 06:08:26.707971 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:08:26.707984 | orchestrator | 2025-09-29 
06:08:26.707994 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-29 06:08:26.708005 | orchestrator | Monday 29 September 2025 06:06:15 +0000 (0:00:01.237) 0:00:07.204 ****** 2025-09-29 06:08:26.708020 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.708037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.708081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.708096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.708110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.708123 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.708136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.708156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708171 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708252 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 
06:08:26.708265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708283 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708329 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-29 06:08:26.708350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.708404 | orchestrator | 2025-09-29 06:08:26.708415 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 
2025-09-29 06:08:26.708426 | orchestrator | Monday 29 September 2025 06:06:19 +0000 (0:00:04.712) 0:00:11.916 ****** 2025-09-29 06:08:26.708438 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708455 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708466 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708484 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:08:26.708496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708663 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:08:26.708674 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:08:26.708685 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:08:26.708696 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:08:26.708708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-29 06:08:26.708735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708784 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:08:26.708797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708838 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:08:26.708849 | orchestrator | 2025-09-29 06:08:26.708860 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-09-29 06:08:26.708871 | orchestrator | Monday 29 September 2025 06:06:21 +0000 (0:00:01.171) 0:00:13.088 ****** 2025-09-29 06:08:26.708883 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708894 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708906 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.708937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.708959 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:08:26.708970 | orchestrator | 
skipping: [testbed-node-0] 2025-09-29 06:08:26.708999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.709012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709035 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:08:26.709046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.709058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709093 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:08:26.709104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.709122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709145 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:08:26.709156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.709167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.709197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-29 06:08:26.709213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:08:26.709225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:08:26.709236 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:08:26.709247 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:08:26.709258 | orchestrator |
2025-09-29 06:08:26.709269 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-29 06:08:26.709280 | orchestrator | Monday 29 September 2025 06:06:23 +0000 (0:00:02.294) 0:00:15.382 ******
2025-09-29 06:08:26.709291 | orchestrator | skipping: [testbed-manager]
2025-09-29 06:08:26.709302 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:08:26.709312 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:08:26.709323 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:08:26.709334 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:08:26.709350 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:08:26.709361 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:08:26.709372 | orchestrator |
2025-09-29 06:08:26.709383 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-29 06:08:26.709394 | orchestrator | Monday 29 September 2025 06:06:24 +0000 (0:00:01.040) 0:00:16.423 ******
2025-09-29 06:08:26.709405 | orchestrator | skipping: [testbed-manager]
2025-09-29 06:08:26.709415 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:08:26.709426 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:08:26.709436 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:08:26.709447 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:08:26.709458 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:08:26.709468 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:08:26.709479 | orchestrator |
2025-09-29 06:08:26.709489 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-29 06:08:26.709500 | orchestrator | Monday 29 September 2025 06:06:25 +0000 (0:00:01.363) 0:00:17.786 ******
2025-09-29 06:08:26.709511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-29 06:08:26.709532 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-29 06:08:26.709544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes':
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.709556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.709572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2025-09-29 06:08:26.709601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.709613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.709631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709670 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709682 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709739 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709789 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.709817 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:08:26.709828 | orchestrator |
2025-09-29 06:08:26.709839 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-09-29 06:08:26.709850 | orchestrator | Monday 29 September 2025 06:06:32 +0000 (0:00:06.685) 0:00:24.472 ******
2025-09-29 06:08:26.709861 | orchestrator | [WARNING]: Skipped
2025-09-29 06:08:26.709873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-09-29 06:08:26.709884 | orchestrator | to this access issue:
2025-09-29 06:08:26.709894 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-09-29 06:08:26.709905 | orchestrator | directory
2025-09-29 06:08:26.709916 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 06:08:26.709927 | orchestrator |
2025-09-29 06:08:26.709938 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-09-29 06:08:26.709948 | orchestrator | Monday 29 September 2025 06:06:33 +0000 (0:00:01.272) 0:00:25.744 ******
2025-09-29 06:08:26.709959 | orchestrator | [WARNING]: Skipped
2025-09-29 06:08:26.709970 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-09-29 06:08:26.709994 | orchestrator | to this access issue:
2025-09-29 06:08:26.710005 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-09-29 06:08:26.710076 | orchestrator | directory
2025-09-29 06:08:26.710091 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 06:08:26.710102 | orchestrator |
2025-09-29 06:08:26.710113 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-09-29 06:08:26.710124 | orchestrator | Monday 29 September 2025 06:06:34 +0000 (0:00:01.039) 0:00:26.784 ******
2025-09-29 06:08:26.710135 | orchestrator | [WARNING]: Skipped
2025-09-29 06:08:26.710146 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-09-29 06:08:26.710157 | orchestrator | to this access issue:
2025-09-29 06:08:26.710167 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-09-29 06:08:26.710178 | orchestrator | directory
2025-09-29 06:08:26.710189 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 06:08:26.710200 | orchestrator |
2025-09-29 06:08:26.710211 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-09-29 06:08:26.710222 | orchestrator | Monday 29 September 2025 06:06:35 +0000 (0:00:00.893) 0:00:27.677 ******
2025-09-29 06:08:26.710232 | orchestrator | [WARNING]: Skipped
2025-09-29 06:08:26.710243 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-09-29 06:08:26.710254 | orchestrator | to this access issue:
2025-09-29 06:08:26.710265 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-09-29 06:08:26.710275 | orchestrator | directory
2025-09-29 06:08:26.710286 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 06:08:26.710297 | orchestrator |
2025-09-29 06:08:26.710308 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-09-29 06:08:26.710318 | orchestrator | Monday 29 September 2025 06:06:36 +0000 (0:00:00.646) 0:00:28.323 ******
2025-09-29 06:08:26.710329 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:08:26.710340 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:08:26.710351 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:08:26.710361 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:08:26.710372 | orchestrator | changed: [testbed-manager]
2025-09-29 06:08:26.710383 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:08:26.710394 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:08:26.710404 | orchestrator |
2025-09-29 06:08:26.710415 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-09-29 06:08:26.710426 | orchestrator | Monday 29 September 2025 06:06:39 +0000 (0:00:03.481) 0:00:31.804 ******
2025-09-29 06:08:26.710437 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-29 06:08:26.710448 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-29 06:08:26.710459 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-29 06:08:26.710469 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-29 06:08:26.710481 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-29 06:08:26.710491 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-29 06:08:26.710502 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-09-29 06:08:26.710513 | orchestrator |
2025-09-29 06:08:26.710523 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-09-29 06:08:26.710534 | orchestrator | Monday 29 September 2025 06:06:43 +0000 (0:00:02.600) 0:00:35.003 ******
2025-09-29 06:08:26.710545 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:08:26.710564 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:08:26.710575 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:08:26.710591 | orchestrator | changed: [testbed-manager]
2025-09-29 06:08:26.710602 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:08:26.710613 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:08:26.710623 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:08:26.710634 | orchestrator |
2025-09-29 06:08:26.710645 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-09-29 06:08:26.710656 | orchestrator | Monday 29 September 2025 06:06:45 +0000 (0:00:02.600) 0:00:37.603 ******
2025-09-29 06:08:26.710667 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-29 06:08:26.710686 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:08:26.710698 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image':
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.710710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.710721 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.710733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.710775 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.710794 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.710806 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.710824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.710836 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.710847 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.710858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.710870 | orchestrator | 
ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.710888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.710900 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.710917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:08:26.710936 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.710948 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.710960 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.710971 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:08:26.710982 | orchestrator |
2025-09-29 06:08:26.710993 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-09-29 06:08:26.711004 | orchestrator | Monday 29 September 2025 06:06:47 +0000 (0:00:02.193) 0:00:39.796 ******
2025-09-29 06:08:26.711020 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-29 06:08:26.711032 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-29 06:08:26.711043 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-29 06:08:26.711054 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-29 06:08:26.711064 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-29 06:08:26.711075 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-29 06:08:26.711086 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-09-29 06:08:26.711096 | orchestrator |
2025-09-29 06:08:26.711107 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-09-29 06:08:26.711118 | orchestrator | Monday 29 September 2025 06:06:52 +0000 (0:00:04.305) 0:00:44.101 ******
2025-09-29 06:08:26.711129 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-29 06:08:26.711139 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-29 06:08:26.711154 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-29 06:08:26.711165 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-29 06:08:26.711176 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-29 06:08:26.711187 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-29 06:08:26.711198 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-09-29 06:08:26.711208 | orchestrator |
2025-09-29 06:08:26.711219 | orchestrator | TASK [common : Check common containers] ****************************************
2025-09-29 06:08:26.711230 | orchestrator | Monday 29 September 2025 06:06:55 +0000 (0:00:03.099) 0:00:47.201 ******
2025-09-29 06:08:26.711241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-29 06:08:26.711264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-29 06:08:26.711276 | orchestrator |
changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.711288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.711311 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.711322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711338 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.711350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-29 06:08:26.711389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-29 06:08:26.711430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711446 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-09-29 06:08:26.711488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711511 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-29 06:08:26.711543 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:08:26.711554 | orchestrator | 2025-09-29 06:08:26.711565 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-29 06:08:26.711576 | orchestrator | Monday 29 September 2025 06:06:59 +0000 (0:00:03.902) 0:00:51.103 ****** 2025-09-29 06:08:26.711587 | orchestrator | changed: [testbed-manager] 2025-09-29 06:08:26.711598 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:08:26.711609 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:08:26.711619 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:08:26.711630 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:08:26.711641 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:08:26.711651 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:08:26.711662 | orchestrator | 2025-09-29 06:08:26.711673 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-29 06:08:26.711684 | orchestrator | Monday 29 September 2025 06:07:00 +0000 (0:00:01.484) 0:00:52.588 ****** 2025-09-29 06:08:26.711694 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:08:26.711705 | orchestrator | changed: [testbed-manager] 2025-09-29 06:08:26.711715 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:08:26.711727 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:08:26.711737 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:08:26.711799 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:08:26.711817 | 
orchestrator | changed: [testbed-node-5]
2025-09-29 06:08:26.711834 | orchestrator |
2025-09-29 06:08:26.711853 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-29 06:08:26.711864 | orchestrator | Monday 29 September 2025 06:07:01 +0000 (0:00:01.199) 0:00:53.788 ******
2025-09-29 06:08:26.711875 | orchestrator |
2025-09-29 06:08:26.711886 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-29 06:08:26.711896 | orchestrator | Monday 29 September 2025 06:07:01 +0000 (0:00:00.073) 0:00:53.861 ******
2025-09-29 06:08:26.711907 | orchestrator |
2025-09-29 06:08:26.711918 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-29 06:08:26.711928 | orchestrator | Monday 29 September 2025 06:07:01 +0000 (0:00:00.094) 0:00:53.956 ******
2025-09-29 06:08:26.711939 | orchestrator |
2025-09-29 06:08:26.711950 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-29 06:08:26.711961 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:00.074) 0:00:54.030 ******
2025-09-29 06:08:26.711972 | orchestrator |
2025-09-29 06:08:26.711983 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-29 06:08:26.711994 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:00.181) 0:00:54.212 ******
2025-09-29 06:08:26.712013 | orchestrator |
2025-09-29 06:08:26.712024 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-29 06:08:26.712035 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:00.088) 0:00:54.300 ******
2025-09-29 06:08:26.712045 | orchestrator |
2025-09-29 06:08:26.712056 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-29 06:08:26.712066 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:00.085) 0:00:54.385 ******
2025-09-29 06:08:26.712077 | orchestrator |
2025-09-29 06:08:26.712088 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-29 06:08:26.712107 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:00.145) 0:00:54.530 ******
2025-09-29 06:08:26.712118 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:08:26.712129 | orchestrator | changed: [testbed-manager]
2025-09-29 06:08:26.712140 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:08:26.712151 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:08:26.712161 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:08:26.712172 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:08:26.712182 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:08:26.712193 | orchestrator |
2025-09-29 06:08:26.712204 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-29 06:08:26.712215 | orchestrator | Monday 29 September 2025 06:07:40 +0000 (0:00:38.134) 0:01:32.665 ******
2025-09-29 06:08:26.712228 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:08:26.712246 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:08:26.712262 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:08:26.712280 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:08:26.712297 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:08:26.712314 | orchestrator | changed: [testbed-manager]
2025-09-29 06:08:26.712331 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:08:26.712349 | orchestrator |
2025-09-29 06:08:26.712364 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-29 06:08:26.712382 | orchestrator | Monday 29 September 2025 06:08:13 +0000 (0:00:33.163) 0:02:05.829 ******
2025-09-29 06:08:26.712400 | orchestrator | ok: [testbed-manager]
2025-09-29 06:08:26.712418 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:08:26.712438 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:08:26.712456 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:08:26.712475 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:08:26.712494 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:08:26.712512 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:08:26.712531 | orchestrator |
2025-09-29 06:08:26.712550 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-29 06:08:26.712568 | orchestrator | Monday 29 September 2025 06:08:15 +0000 (0:00:01.802) 0:02:07.631 ******
2025-09-29 06:08:26.712585 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:08:26.712604 | orchestrator | changed: [testbed-manager]
2025-09-29 06:08:26.712623 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:08:26.712642 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:08:26.712660 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:08:26.712679 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:08:26.712698 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:08:26.712716 | orchestrator |
2025-09-29 06:08:26.712733 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:08:26.712780 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-29 06:08:26.712801 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-29 06:08:26.712821 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-29 06:08:26.712841 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-29 06:08:26.712874 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-29 06:08:26.712894 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-29 06:08:26.712913 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-29 06:08:26.712930 | orchestrator |
2025-09-29 06:08:26.712948 | orchestrator |
2025-09-29 06:08:26.712975 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:08:26.712995 | orchestrator | Monday 29 September 2025 06:08:24 +0000 (0:00:08.956) 0:02:16.588 ******
2025-09-29 06:08:26.713012 | orchestrator | ===============================================================================
2025-09-29 06:08:26.713030 | orchestrator | common : Restart fluentd container ------------------------------------- 38.14s
2025-09-29 06:08:26.713048 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 33.16s
2025-09-29 06:08:26.713066 | orchestrator | common : Restart cron container ----------------------------------------- 8.96s
2025-09-29 06:08:26.713086 | orchestrator | common : Copying over config.json files for services -------------------- 6.69s
2025-09-29 06:08:26.713104 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.71s
2025-09-29 06:08:26.713122 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.31s
2025-09-29 06:08:26.713140 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.22s
2025-09-29 06:08:26.713160 | orchestrator | common : Check common containers ---------------------------------------- 3.90s
2025-09-29 06:08:26.713177 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.48s
2025-09-29 06:08:26.713196 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.20s
2025-09-29 06:08:26.713207 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.10s
2025-09-29 06:08:26.713218 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.60s
2025-09-29 06:08:26.713228 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.29s
2025-09-29 06:08:26.713239 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.19s
2025-09-29 06:08:26.713259 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.80s
2025-09-29 06:08:26.713271 | orchestrator | common : Creating log volume -------------------------------------------- 1.48s
2025-09-29 06:08:26.713281 | orchestrator | common : include_tasks -------------------------------------------------- 1.43s
2025-09-29 06:08:26.713292 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.36s
2025-09-29 06:08:26.713303 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.27s
2025-09-29 06:08:26.713314 | orchestrator | common : include_tasks -------------------------------------------------- 1.24s
2025-09-29 06:08:26.713325 | orchestrator | 2025-09-29 06:08:26 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:29.734167 | orchestrator | 2025-09-29 06:08:29 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:08:29.734358 | orchestrator | 2025-09-29 06:08:29 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:08:29.735078 | orchestrator | 2025-09-29 06:08:29 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:08:29.735737 | orchestrator | 2025-09-29 06:08:29 | INFO  | Task 8aa168ba-6ac1-4699-a7d7-4f1bb3a36065 is in state STARTED
2025-09-29 06:08:29.736439 | orchestrator | 2025-09-29 06:08:29 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED
2025-09-29 06:08:29.737310 | orchestrator | 2025-09-29 06:08:29 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:08:29.737336 | orchestrator | 2025-09-29 06:08:29 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:32.761276 | orchestrator | 2025-09-29 06:08:32 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:08:32.761409 | orchestrator | 2025-09-29 06:08:32 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:08:32.763317 | orchestrator | 2025-09-29 06:08:32 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:08:32.763686 | orchestrator | 2025-09-29 06:08:32 | INFO  | Task 8aa168ba-6ac1-4699-a7d7-4f1bb3a36065 is in state STARTED
2025-09-29 06:08:32.764265 | orchestrator | 2025-09-29 06:08:32 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED
2025-09-29 06:08:32.764815 | orchestrator | 2025-09-29 06:08:32 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:08:32.764915 | orchestrator | 2025-09-29 06:08:32 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:35.794305 | orchestrator | 2025-09-29 06:08:35 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:08:35.795331 | orchestrator | 2025-09-29 06:08:35 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:08:35.795815 | orchestrator | 2025-09-29 06:08:35 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:08:35.796435 | orchestrator | 2025-09-29 06:08:35 | INFO  | Task 8aa168ba-6ac1-4699-a7d7-4f1bb3a36065 is in state STARTED
2025-09-29 06:08:35.797698 | orchestrator | 2025-09-29 06:08:35 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED
2025-09-29 06:08:35.798075 | orchestrator | 2025-09-29 06:08:35 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:08:35.798174 | orchestrator | 2025-09-29 06:08:35 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:38.830295 | orchestrator | 2025-09-29 06:08:38 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:08:38.831571 | orchestrator | 2025-09-29 06:08:38 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:08:38.832043 | orchestrator | 2025-09-29 06:08:38 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:08:38.832711 | orchestrator | 2025-09-29 06:08:38 | INFO  | Task 8aa168ba-6ac1-4699-a7d7-4f1bb3a36065 is in state STARTED
2025-09-29 06:08:38.833225 | orchestrator | 2025-09-29 06:08:38 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED
2025-09-29 06:08:38.833876 | orchestrator | 2025-09-29 06:08:38 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:08:38.833909 | orchestrator | 2025-09-29 06:08:38 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:42.044523 | orchestrator | 2025-09-29 06:08:41 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:08:42.044610 | orchestrator | 2025-09-29 06:08:41 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:08:42.044627 | orchestrator | 2025-09-29 06:08:41 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:08:42.044641 | orchestrator | 2025-09-29 06:08:41 | INFO  | Task 8aa168ba-6ac1-4699-a7d7-4f1bb3a36065 is in state STARTED
2025-09-29 06:08:42.044654 | orchestrator | 2025-09-29 06:08:42 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED
2025-09-29 06:08:42.044696 | orchestrator | 2025-09-29 06:08:42 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:08:42.044712 | orchestrator | 2025-09-29 06:08:42 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:45.073275 | orchestrator | 2025-09-29 06:08:45 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:08:45.073373 | orchestrator | 2025-09-29 06:08:45 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:08:45.073385 | orchestrator | 2025-09-29 06:08:45 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:08:45.073396 | orchestrator | 2025-09-29 06:08:45 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:08:45.073406 | orchestrator | 2025-09-29 06:08:45 | INFO  | Task 8aa168ba-6ac1-4699-a7d7-4f1bb3a36065 is in state SUCCESS
2025-09-29 06:08:45.073416 | orchestrator | 2025-09-29 06:08:45 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED
2025-09-29 06:08:45.073426 | orchestrator | 2025-09-29 06:08:45 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:08:45.073436 | orchestrator | 2025-09-29 06:08:45 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:48.104162 | orchestrator | 2025-09-29 06:08:48 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:08:48.104264 | orchestrator | 2025-09-29 06:08:48 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:08:48.105502 | orchestrator | 2025-09-29 06:08:48 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:08:48.109903 | orchestrator | 2025-09-29 06:08:48 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:08:48.109936 | orchestrator | 2025-09-29 06:08:48 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED
2025-09-29 06:08:48.109947 | orchestrator | 2025-09-29 06:08:48 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:08:48.109958 | orchestrator | 2025-09-29 06:08:48 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:08:51.138817 | orchestrator | 2025-09-29
06:08:51 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:08:51.138965 | orchestrator | 2025-09-29 06:08:51 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:08:51.141327 | orchestrator | 2025-09-29 06:08:51 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:51.145620 | orchestrator | 2025-09-29 06:08:51 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:08:51.146160 | orchestrator | 2025-09-29 06:08:51 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state STARTED 2025-09-29 06:08:51.146966 | orchestrator | 2025-09-29 06:08:51 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:51.146991 | orchestrator | 2025-09-29 06:08:51 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:54.191193 | orchestrator | 2025-09-29 06:08:54 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:08:54.191283 | orchestrator | 2025-09-29 06:08:54 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:08:54.192759 | orchestrator | 2025-09-29 06:08:54 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:54.192906 | orchestrator | 2025-09-29 06:08:54 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:08:54.193632 | orchestrator | 2025-09-29 06:08:54 | INFO  | Task 62768481-5863-41bd-ab36-12a40baec238 is in state SUCCESS 2025-09-29 06:08:54.194987 | orchestrator | 2025-09-29 06:08:54.195048 | orchestrator | 2025-09-29 06:08:54.195068 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:08:54.195087 | orchestrator | 2025-09-29 06:08:54.195104 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:08:54.195122 | orchestrator | Monday 29 September 
2025 06:08:29 +0000 (0:00:00.259) 0:00:00.259 ****** 2025-09-29 06:08:54.195141 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:08:54.195160 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:08:54.195178 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:08:54.195195 | orchestrator | 2025-09-29 06:08:54.195214 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:08:54.195231 | orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.298) 0:00:00.557 ****** 2025-09-29 06:08:54.195249 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-29 06:08:54.195267 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-29 06:08:54.195285 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-29 06:08:54.195302 | orchestrator | 2025-09-29 06:08:54.195321 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-29 06:08:54.195339 | orchestrator | 2025-09-29 06:08:54.195357 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-29 06:08:54.195375 | orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.362) 0:00:00.920 ****** 2025-09-29 06:08:54.195387 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:08:54.195400 | orchestrator | 2025-09-29 06:08:54.195411 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-29 06:08:54.195422 | orchestrator | Monday 29 September 2025 06:08:31 +0000 (0:00:00.719) 0:00:01.639 ****** 2025-09-29 06:08:54.195433 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-29 06:08:54.195444 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-29 06:08:54.195454 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-29 
06:08:54.195465 | orchestrator | 2025-09-29 06:08:54.195475 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-29 06:08:54.195486 | orchestrator | Monday 29 September 2025 06:08:31 +0000 (0:00:00.701) 0:00:02.341 ****** 2025-09-29 06:08:54.195496 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-29 06:08:54.195507 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-29 06:08:54.195518 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-29 06:08:54.195528 | orchestrator | 2025-09-29 06:08:54.195539 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-29 06:08:54.195550 | orchestrator | Monday 29 September 2025 06:08:33 +0000 (0:00:02.043) 0:00:04.384 ****** 2025-09-29 06:08:54.195560 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:08:54.195571 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:08:54.195582 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:08:54.195592 | orchestrator | 2025-09-29 06:08:54.195603 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-29 06:08:54.195614 | orchestrator | Monday 29 September 2025 06:08:36 +0000 (0:00:02.170) 0:00:06.554 ****** 2025-09-29 06:08:54.195624 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:08:54.195635 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:08:54.195645 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:08:54.195656 | orchestrator | 2025-09-29 06:08:54.195666 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:08:54.195677 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:08:54.195690 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:08:54.195719 | 
orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:08:54.195730 | orchestrator | 2025-09-29 06:08:54.195785 | orchestrator | 2025-09-29 06:08:54.195797 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:08:54.195808 | orchestrator | Monday 29 September 2025 06:08:43 +0000 (0:00:07.012) 0:00:13.567 ****** 2025-09-29 06:08:54.195818 | orchestrator | =============================================================================== 2025-09-29 06:08:54.195829 | orchestrator | memcached : Restart memcached container --------------------------------- 7.01s 2025-09-29 06:08:54.195855 | orchestrator | memcached : Check memcached container ----------------------------------- 2.17s 2025-09-29 06:08:54.195865 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.04s 2025-09-29 06:08:54.195876 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.72s 2025-09-29 06:08:54.195887 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.70s 2025-09-29 06:08:54.195897 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2025-09-29 06:08:54.195908 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-09-29 06:08:54.195918 | orchestrator | 2025-09-29 06:08:54.195929 | orchestrator | 2025-09-29 06:08:54.195940 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:08:54.195950 | orchestrator | 2025-09-29 06:08:54.195961 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:08:54.195971 | orchestrator | Monday 29 September 2025 06:08:29 +0000 (0:00:00.250) 0:00:00.250 ****** 2025-09-29 06:08:54.195982 | orchestrator | ok: [testbed-node-0] 2025-09-29 
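The PLAY RECAP lines above (e.g. `testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 ...`) are the per-host summary Ansible prints at the end of each play; a consumer of this log can gate on them to decide whether the deploy succeeded. A minimal sketch of parsing one such line (the helper name is hypothetical, not part of OSISM or Ansible):

```python
import re

# Matches an Ansible "PLAY RECAP" host line such as:
#   testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
RECAP_RE = re.compile(r"(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for one recap line."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {k: int(v) for k, v in
                (pair.split("=") for pair in m.group("counters").split())}
    return m.group("host"), counters

host, counters = parse_recap_line(
    "testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 "
    "skipped=0 rescued=0 ignored=0"
)
# A deploy is healthy when both failed and unreachable are zero on every host.
assert counters["failed"] == 0 and counters["unreachable"] == 0
```

In this run all six recap lines (three hosts per play) report `failed=0` and `unreachable=0`, so both the memcached and redis plays completed cleanly.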
06:08:54.195992 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:08:54.196003 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:08:54.196014 | orchestrator | 2025-09-29 06:08:54.196024 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:08:54.196052 | orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.282) 0:00:00.533 ****** 2025-09-29 06:08:54.196063 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-09-29 06:08:54.196074 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-09-29 06:08:54.196085 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-09-29 06:08:54.196096 | orchestrator | 2025-09-29 06:08:54.196106 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-09-29 06:08:54.196117 | orchestrator | 2025-09-29 06:08:54.196128 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-09-29 06:08:54.196138 | orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.615) 0:00:01.148 ****** 2025-09-29 06:08:54.196149 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:08:54.196159 | orchestrator | 2025-09-29 06:08:54.196170 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-09-29 06:08:54.196181 | orchestrator | Monday 29 September 2025 06:08:31 +0000 (0:00:00.709) 0:00:01.858 ****** 2025-09-29 06:08:54.196194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196339 | orchestrator | 2025-09-29 06:08:54.196359 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-09-29 06:08:54.196378 | orchestrator | Monday 29 September 2025 06:08:32 +0000 (0:00:01.267) 0:00:03.126 ****** 2025-09-29 06:08:54.196396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 
'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196545 | orchestrator | 2025-09-29 06:08:54.196564 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-29 06:08:54.196583 | orchestrator | Monday 29 September 2025 06:08:35 +0000 (0:00:02.884) 
0:00:06.011 ****** 2025-09-29 06:08:54.196602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196696 | orchestrator | 
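Each loop item above carries the full container definition for the redis and redis-sentinel services, including a Kolla-style healthcheck block with string-valued durations in seconds (`'interval': '30'`, `'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379']`, ...). The Docker Engine API expresses these durations in nanoseconds, so a deployment tool has to convert. A minimal sketch of that conversion, assuming the Docker HealthConfig field names (`Test`, `Interval`, `Timeout`, `Retries`, `StartPeriod`); the helper itself is hypothetical:

```python
def to_docker_healthcheck(hc: dict) -> dict:
    """Convert a Kolla-style healthcheck dict (seconds as strings)
    into the nanosecond-based structure the Docker API expects."""
    ns = 1_000_000_000  # Docker expresses healthcheck durations in nanoseconds
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_listen redis-server 6379']
        "Interval": int(hc["interval"]) * ns,
        "Timeout": int(hc["timeout"]) * ns,
        "Retries": int(hc["retries"]),
        "StartPeriod": int(hc["start_period"]) * ns,
    }

# The redis healthcheck exactly as it appears in the loop items above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
      "timeout": "30"}
docker_hc = to_docker_healthcheck(hc)
assert docker_hc["Interval"] == 30_000_000_000
assert docker_hc["Retries"] == 3
```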
2025-09-29 06:08:54.196714 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-29 06:08:54.196725 | orchestrator | Monday 29 September 2025 06:08:38 +0000 (0:00:03.084) 0:00:09.095 ****** 2025-09-29 06:08:54.196780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-29 
06:08:54.196824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-29 06:08:54.196871 | orchestrator | 2025-09-29 06:08:54.196882 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-29 06:08:54.196892 | orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:01.936) 0:00:11.031 ****** 2025-09-29 06:08:54.196903 | orchestrator | 2025-09-29 06:08:54.196914 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-29 06:08:54.196932 | orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:00.080) 0:00:11.112 ****** 2025-09-29 06:08:54.196943 | orchestrator | 2025-09-29 06:08:54.196954 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-29 06:08:54.196964 | orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:00.056) 0:00:11.168 ****** 2025-09-29 06:08:54.196981 | orchestrator | 2025-09-29 06:08:54.196992 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-29 06:08:54.197003 | orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:00.157) 0:00:11.325 ****** 2025-09-29 06:08:54.197014 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:08:54.197024 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:08:54.197035 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:08:54.197045 | orchestrator | 2025-09-29 06:08:54.197056 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-09-29 06:08:54.197067 | orchestrator | Monday 29 September 2025 06:08:44 +0000 (0:00:03.425) 0:00:14.751 ****** 2025-09-29 06:08:54.197077 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:08:54.197088 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:08:54.197099 | orchestrator | 
changed: [testbed-node-2] 2025-09-29 06:08:54.197109 | orchestrator | 2025-09-29 06:08:54.197120 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:08:54.197131 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:08:54.197142 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:08:54.197153 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:08:54.197164 | orchestrator | 2025-09-29 06:08:54.197175 | orchestrator | 2025-09-29 06:08:54.197186 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:08:54.197196 | orchestrator | Monday 29 September 2025 06:08:53 +0000 (0:00:08.944) 0:00:23.696 ****** 2025-09-29 06:08:54.197207 | orchestrator | =============================================================================== 2025-09-29 06:08:54.197218 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.94s 2025-09-29 06:08:54.197235 | orchestrator | redis : Restart redis container ----------------------------------------- 3.43s 2025-09-29 06:08:54.197253 | orchestrator | redis : Copying over redis config files --------------------------------- 3.08s 2025-09-29 06:08:54.197271 | orchestrator | redis : Copying over default config.json files -------------------------- 2.88s 2025-09-29 06:08:54.197289 | orchestrator | redis : Check redis containers ------------------------------------------ 1.94s 2025-09-29 06:08:54.197308 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.27s 2025-09-29 06:08:54.197326 | orchestrator | redis : include_tasks --------------------------------------------------- 0.71s 2025-09-29 06:08:54.197343 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.62s 2025-09-29 06:08:54.197355 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.29s 2025-09-29 06:08:54.197365 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-09-29 06:08:54.197376 | orchestrator | 2025-09-29 06:08:54 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:54.197387 | orchestrator | 2025-09-29 06:08:54 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:08:57.235317 | orchestrator | 2025-09-29 06:08:57 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:08:57.235540 | orchestrator | 2025-09-29 06:08:57 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:08:57.236967 | orchestrator | 2025-09-29 06:08:57 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:08:57.239174 | orchestrator | 2025-09-29 06:08:57 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:08:57.240288 | orchestrator | 2025-09-29 06:08:57 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:08:57.240389 | orchestrator | 2025-09-29 06:08:57 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:09:00.292696 | orchestrator | 2025-09-29 06:09:00 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:09:00.292852 | orchestrator | 2025-09-29 06:09:00 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:09:00.294395 | orchestrator | 2025-09-29 06:09:00 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:09:00.296366 | orchestrator | 2025-09-29 06:09:00 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:09:00.296816 | orchestrator | 2025-09-29 06:09:00 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in 
state STARTED 2025-09-29 06:09:00.297618 | orchestrator | 2025-09-29 06:09:00 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:09:03.354378 | orchestrator | 2025-09-29 06:09:03 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:09:03.359041 | orchestrator | 2025-09-29 06:09:03 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:09:03.359120 | orchestrator | 2025-09-29 06:09:03 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:09:03.362567 | orchestrator | 2025-09-29 06:09:03 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:09:03.365862 | orchestrator | 2025-09-29 06:09:03 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:09:03.365911 | orchestrator | 2025-09-29 06:09:03 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:09:06.465434 | orchestrator | 2025-09-29 06:09:06 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:09:06.465607 | orchestrator | 2025-09-29 06:09:06 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:09:06.466806 | orchestrator | 2025-09-29 06:09:06 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:09:06.467025 | orchestrator | 2025-09-29 06:09:06 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:09:06.467878 | orchestrator | 2025-09-29 06:09:06 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:09:06.467908 | orchestrator | 2025-09-29 06:09:06 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:09:09.499136 | orchestrator | 2025-09-29 06:09:09 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:09:09.504672 | orchestrator | 2025-09-29 06:09:09 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 
06:09:09.505094 | orchestrator | 2025-09-29 06:09:09 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:09:09.507264 | orchestrator | 2025-09-29 06:09:09 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:09:09.507964 | orchestrator | 2025-09-29 06:09:09 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:09:09.508015 | orchestrator | 2025-09-29 06:09:09 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:09:12.554240 | orchestrator | 2025-09-29 06:09:12 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:09:12.555711 | orchestrator | 2025-09-29 06:09:12 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:09:12.557474 | orchestrator | 2025-09-29 06:09:12 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:09:12.559177 | orchestrator | 2025-09-29 06:09:12 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:09:12.560876 | orchestrator | 2025-09-29 06:09:12 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:09:12.560955 | orchestrator | 2025-09-29 06:09:12 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:09:15.674505 | orchestrator | 2025-09-29 06:09:15 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:09:15.675306 | orchestrator | 2025-09-29 06:09:15 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED 2025-09-29 06:09:15.676386 | orchestrator | 2025-09-29 06:09:15 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED 2025-09-29 06:09:15.677330 | orchestrator | 2025-09-29 06:09:15 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:09:15.678805 | orchestrator | 2025-09-29 06:09:15 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 
06:09:15.678845 | orchestrator | 2025-09-29 06:09:15 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:18.722084 | orchestrator | 2025-09-29 06:09:18 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:18.724598 | orchestrator | 2025-09-29 06:09:18 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:09:18.726435 | orchestrator | 2025-09-29 06:09:18 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:09:18.726975 | orchestrator | 2025-09-29 06:09:18 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:18.727611 | orchestrator | 2025-09-29 06:09:18 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:18.727662 | orchestrator | 2025-09-29 06:09:18 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:21.776680 | orchestrator | 2025-09-29 06:09:21 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:21.777319 | orchestrator | 2025-09-29 06:09:21 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:09:21.781188 | orchestrator | 2025-09-29 06:09:21 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:09:21.782305 | orchestrator | 2025-09-29 06:09:21 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:21.783472 | orchestrator | 2025-09-29 06:09:21 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:21.783515 | orchestrator | 2025-09-29 06:09:21 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:25.177043 | orchestrator | 2025-09-29 06:09:25 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:25.177147 | orchestrator | 2025-09-29 06:09:25 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:09:25.177162 | orchestrator | 2025-09-29 06:09:25 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:09:25.177174 | orchestrator | 2025-09-29 06:09:25 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:25.177184 | orchestrator | 2025-09-29 06:09:25 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:25.177195 | orchestrator | 2025-09-29 06:09:25 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:28.378133 | orchestrator | 2025-09-29 06:09:28 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:28.378262 | orchestrator | 2025-09-29 06:09:28 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:09:28.378307 | orchestrator | 2025-09-29 06:09:28 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:09:28.378319 | orchestrator | 2025-09-29 06:09:28 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:28.378331 | orchestrator | 2025-09-29 06:09:28 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:28.378342 | orchestrator | 2025-09-29 06:09:28 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:31.407500 | orchestrator | 2025-09-29 06:09:31 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:31.408241 | orchestrator | 2025-09-29 06:09:31 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state STARTED
2025-09-29 06:09:31.410553 | orchestrator | 2025-09-29 06:09:31 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:09:31.410964 | orchestrator | 2025-09-29 06:09:31 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:31.411503 | orchestrator | 2025-09-29 06:09:31 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:31.411607 | orchestrator | 2025-09-29 06:09:31 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:34.451427 | orchestrator | 2025-09-29 06:09:34 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:34.453924 | orchestrator | 2025-09-29 06:09:34 | INFO  | Task c79894b0-bc1a-4e9a-8410-9b329942eac6 is in state SUCCESS
2025-09-29 06:09:34.454140 | orchestrator |
2025-09-29 06:09:34.455997 | orchestrator |
2025-09-29 06:09:34.456038 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:09:34.456047 | orchestrator |
2025-09-29 06:09:34.456059 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:09:34.456074 | orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.287) 0:00:00.287 ******
2025-09-29 06:09:34.456082 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:34.456091 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:34.456098 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:34.456105 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:09:34.456112 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:09:34.456120 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:09:34.456127 | orchestrator |
2025-09-29 06:09:34.456180 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:09:34.456189 | orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.692) 0:00:00.980 ******
2025-09-29 06:09:34.456196 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-29 06:09:34.456204 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-29 06:09:34.456211 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-29 06:09:34.456218 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-29 06:09:34.456225 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-29 06:09:34.456232 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-29 06:09:34.456240 | orchestrator |
2025-09-29 06:09:34.456247 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-29 06:09:34.456254 | orchestrator |
2025-09-29 06:09:34.456261 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-29 06:09:34.456269 | orchestrator | Monday 29 September 2025 06:08:31 +0000 (0:00:00.811) 0:00:01.791 ******
2025-09-29 06:09:34.456277 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:09:34.456302 | orchestrator |
2025-09-29 06:09:34.456310 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-29 06:09:34.456317 | orchestrator | Monday 29 September 2025 06:08:33 +0000 (0:00:01.424) 0:00:03.216 ******
2025-09-29 06:09:34.456324 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-29 06:09:34.456335 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-29 06:09:34.456348 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-29 06:09:34.456360 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-29 06:09:34.456372 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-29 06:09:34.456383 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-29 06:09:34.456395 | orchestrator |
2025-09-29 06:09:34.456408 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-29 06:09:34.456420 | orchestrator | Monday 29 September 2025 06:08:34 +0000 (0:00:01.333) 0:00:04.549 ******
2025-09-29 06:09:34.456434 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-29 06:09:34.456448 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-29 06:09:34.456460 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-29 06:09:34.456473 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-29 06:09:34.456485 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-29 06:09:34.456497 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-29 06:09:34.456509 | orchestrator |
2025-09-29 06:09:34.456520 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-29 06:09:34.456527 | orchestrator | Monday 29 September 2025 06:08:36 +0000 (0:00:01.646) 0:00:06.196 ******
2025-09-29 06:09:34.456535 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-29 06:09:34.456542 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:34.456550 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-29 06:09:34.456557 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:34.456564 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-29 06:09:34.456571 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:34.456578 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-29 06:09:34.456585 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:34.456594 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-29 06:09:34.456603 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:34.456611 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-29 06:09:34.456619 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:34.456627 | orchestrator |
2025-09-29 06:09:34.456638 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host]
***************** 2025-09-29 06:09:34.456649 | orchestrator | Monday 29 September 2025 06:08:37 +0000 (0:00:01.855) 0:00:08.052 ****** 2025-09-29 06:09:34.456661 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:34.456673 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:09:34.456684 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:09:34.456696 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:09:34.456708 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:09:34.456720 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:09:34.456757 | orchestrator | 2025-09-29 06:09:34.456770 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-29 06:09:34.456784 | orchestrator | Monday 29 September 2025 06:08:38 +0000 (0:00:00.760) 0:00:08.812 ****** 2025-09-29 06:09:34.456844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.456877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.456894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.456907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.456922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.456936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.456964 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.456989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457019 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457089 | orchestrator | 2025-09-29 06:09:34.457103 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-29 06:09:34.457116 | orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:02.246) 0:00:11.059 ****** 2025-09-29 06:09:34.457135 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457162 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457264 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-29 06:09:34.457320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-29 06:09:34.457329 | orchestrator |
2025-09-29 06:09:34.457337 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-09-29 06:09:34.457344 | orchestrator | Monday 29 September 2025 06:08:44 +0000 (0:00:03.037) 0:00:14.097 ******
2025-09-29 06:09:34.457351 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:34.457359 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:34.457366 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:34.457373 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:34.457380 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:34.457387 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:34.457394 | orchestrator |
2025-09-29 06:09:34.457402 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-09-29 06:09:34.457409 | orchestrator | Monday 29 September 2025 06:08:45 +0000 (0:00:01.551) 0:00:15.648 ******
2025-09-29 06:09:34.457417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image':
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-29 06:09:34.457490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-29 06:09:34.457506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-29 06:09:34.457522 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-29 06:09:34.457530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-09-29 06:09:34.457538 | orchestrator |
2025-09-29 06:09:34.457545 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-29 06:09:34.457553 | orchestrator | Monday 29 September 2025 06:08:47 +0000 (0:00:02.008) 0:00:17.657 ******
2025-09-29 06:09:34.457560 | orchestrator |
2025-09-29 06:09:34.457567 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-29 06:09:34.457575 | orchestrator | Monday 29 September 2025 06:08:47 +0000 (0:00:00.226) 0:00:17.883 ******
2025-09-29 06:09:34.457582 | orchestrator |
2025-09-29 06:09:34.457589 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-29 06:09:34.457596 | orchestrator | Monday 29 September 2025 06:08:47 +0000 (0:00:00.103) 0:00:17.987 ******
2025-09-29 06:09:34.457603 | orchestrator |
2025-09-29 06:09:34.457611 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-29 06:09:34.457618 | orchestrator | Monday 29 September 2025 06:08:48 +0000 (0:00:00.259) 0:00:18.246 ******
2025-09-29 06:09:34.457625 | orchestrator |
2025-09-29 06:09:34.457632 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-29 06:09:34.457639 | orchestrator | Monday 29 September 2025 06:08:48 +0000 (0:00:00.128) 0:00:18.375 ******
2025-09-29 06:09:34.457646 | orchestrator |
2025-09-29 06:09:34.457653 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-09-29 06:09:34.457660 | orchestrator | Monday 29 September 2025 06:08:48 +0000 (0:00:00.222) 0:00:18.597 ******
2025-09-29 06:09:34.457668 | orchestrator |
2025-09-29 06:09:34.457675 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-09-29 06:09:34.457686 | orchestrator | Monday 29 September 2025 06:08:48 +0000 (0:00:00.208) 0:00:18.806 ******
2025-09-29 06:09:34.457694 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:34.457701 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:34.457708 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:34.457715 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:34.457753 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:34.457763 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:34.457770 | orchestrator |
2025-09-29 06:09:34.457777 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-09-29 06:09:34.457784 | orchestrator | Monday 29 September 2025 06:08:57 +0000 (0:00:09.176) 0:00:27.982 ******
2025-09-29 06:09:34.457791 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:34.457799 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:34.457806 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:34.457813 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:09:34.457820 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:09:34.457827 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:09:34.457834 | orchestrator |
2025-09-29 06:09:34.457841 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-29 06:09:34.457848 | orchestrator | Monday 29 September 2025 06:08:59 +0000 (0:00:01.418) 0:00:29.401 ******
2025-09-29 06:09:34.457855 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:34.457862 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:34.457869 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:34.457876 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:34.457883 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:34.457890 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:34.457898 | orchestrator |
2025-09-29 06:09:34.457910 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-09-29 06:09:34.457923 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:09.369) 0:00:38.770 ******
2025-09-29 06:09:34.457935 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-09-29 06:09:34.457949 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-09-29 06:09:34.457961 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-09-29 06:09:34.457973 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-09-29 06:09:34.457985 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-09-29 06:09:34.458000 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-09-29 06:09:34.458012 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-09-29 06:09:34.458079 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-09-29 06:09:34.458087 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-09-29 06:09:34.458094 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-09-29 06:09:34.458101 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-09-29 06:09:34.458108 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-09-29 06:09:34.458115 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-29 06:09:34.458123 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-29 06:09:34.458145 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-29 06:09:34.458173 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-29 06:09:34.458187 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-29 06:09:34.458200 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-09-29 06:09:34.458213 | orchestrator |
2025-09-29 06:09:34.458226 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-09-29 06:09:34.458239 | orchestrator | Monday 29 September 2025 06:09:16 +0000 (0:00:07.492) 0:00:46.262 ******
2025-09-29 06:09:34.458252 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-09-29 06:09:34.458264 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:34.458276 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-09-29 06:09:34.458284 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:34.458291 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-09-29 06:09:34.458298 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:34.458305 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-09-29 06:09:34.458313 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-09-29 06:09:34.458320 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-09-29 06:09:34.458327 | orchestrator |
2025-09-29 06:09:34.458334 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-09-29 06:09:34.458341 | orchestrator | Monday 29 September 2025 06:09:18 +0000 (0:00:02.496) 0:00:48.759 ******
2025-09-29 06:09:34.458354 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-09-29 06:09:34.458382 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:34.458395 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-09-29 06:09:34.458407 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:34.458420 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-09-29 06:09:34.458428 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:34.458435 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-09-29 06:09:34.458442 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-09-29 06:09:34.458449 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-09-29 06:09:34.458456 | orchestrator |
2025-09-29 06:09:34.458463 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-09-29 06:09:34.458470 | orchestrator | Monday 29 September 2025 06:09:23 +0000 (0:00:04.626) 0:00:53.386 ******
2025-09-29 06:09:34.458477 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:34.458484 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:34.458491 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:34.458498 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:34.458505 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:34.458512 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:34.458519 | orchestrator |
2025-09-29 06:09:34.458526 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:09:34.458534 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-29 06:09:34.458542 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-29 06:09:34.458550 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-29 06:09:34.458557 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-29 06:09:34.458570 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-29 06:09:34.458584 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-29 06:09:34.458592 | orchestrator |
2025-09-29 06:09:34.458599 | orchestrator |
2025-09-29 06:09:34.458611 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:09:34.458618 | orchestrator | Monday 29 September 2025 06:09:32 +0000 (0:00:09.410) 0:01:02.797 ******
2025-09-29 06:09:34.458625 | orchestrator | ===============================================================================
2025-09-29 06:09:34.458633 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.78s
2025-09-29 06:09:34.458640 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.18s
2025-09-29 06:09:34.458647 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.49s
2025-09-29 06:09:34.458654 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.63s
2025-09-29 06:09:34.458661 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.04s
2025-09-29 06:09:34.458668 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.50s
2025-09-29 06:09:34.458675 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.25s
2025-09-29 06:09:34.458682 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.01s
2025-09-29 06:09:34.458689 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.86s
2025-09-29 06:09:34.458696 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.65s
2025-09-29 06:09:34.458703 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.55s
2025-09-29 06:09:34.458710 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.42s
2025-09-29 06:09:34.458717 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.42s
2025-09-29 06:09:34.458804 | orchestrator | module-load : Load modules ---------------------------------------------- 1.33s
2025-09-29 06:09:34.458814 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.15s
2025-09-29 06:09:34.458821 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2025-09-29 06:09:34.458829 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.76s
2025-09-29 06:09:34.458836 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s
2025-09-29 06:09:34.458843 | orchestrator | 2025-09-29 06:09:34 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state STARTED
2025-09-29 06:09:34.458979 | orchestrator | 2025-09-29 06:09:34 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:34.460480 | orchestrator | 2025-09-29 06:09:34 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:34.462810 | orchestrator | 2025-09-29 06:09:34 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:34.462887 | orchestrator | 2025-09-29 06:09:34 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:37.492098 | orchestrator | 2025-09-29 06:09:37 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:37.492819 | orchestrator | 2025-09-29 06:09:37 | INFO  | Task c1bfad82-9cd5-4e0f-99b2-02cef0c43036 is in state SUCCESS
2025-09-29 06:09:37.496117 | orchestrator |
2025-09-29 06:09:37.496189 | orchestrator |
2025-09-29 06:09:37.496212 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-09-29 06:09:37.496232 | orchestrator |
2025-09-29 06:09:37.496250 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-09-29 06:09:37.496304 | orchestrator | Monday 29 September 2025 06:06:08 +0000 (0:00:00.234) 0:00:00.234 ******
2025-09-29 06:09:37.496324 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:09:37.496344 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:09:37.496363 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:09:37.496380 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.496399 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.496417 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.496432 | orchestrator |
2025-09-29 06:09:37.496443 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-09-29 06:09:37.496454 | orchestrator | Monday 29 September 2025 06:06:09 +0000 (0:00:00.758) 0:00:00.992 ******
2025-09-29 06:09:37.496492 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.496512 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.496539 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.496559 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.496576 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.496594 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.496612 | orchestrator |
2025-09-29 06:09:37.496630 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-09-29 06:09:37.496647 | orchestrator | Monday 29 September 2025 06:06:10 +0000 (0:00:00.893) 0:00:01.886 ******
2025-09-29 06:09:37.496664 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.496682 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.496700 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.496720 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.496771 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.496793 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.496811 | orchestrator |
2025-09-29 06:09:37.496827 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-09-29 06:09:37.496840 | orchestrator | Monday 29 September 2025 06:06:11 +0000 (0:00:00.734) 0:00:02.620 ******
2025-09-29 06:09:37.496853 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:37.496863 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:37.496874 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:37.496885 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.496896 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:37.496905 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:37.496914 | orchestrator |
2025-09-29 06:09:37.496941 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-09-29 06:09:37.496952 | orchestrator | Monday 29 September 2025 06:06:13 +0000 (0:00:02.166) 0:00:04.787 ******
2025-09-29 06:09:37.496961 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:37.496971 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:37.496980 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:37.496990 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.496999 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:37.497008 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:37.497018 | orchestrator |
2025-09-29 06:09:37.497028 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-09-29 06:09:37.497037 | orchestrator | Monday 29 September 2025 06:06:14 +0000 (0:00:01.105) 0:00:05.893 ******
2025-09-29 06:09:37.497046 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:37.497056 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:37.497065 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:37.497075 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.497084 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:37.497094 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:37.497106 | orchestrator |
2025-09-29 06:09:37.497121 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-09-29 06:09:37.497138 | orchestrator | Monday 29 September 2025 06:06:15 +0000 (0:00:01.250) 0:00:07.144 ******
2025-09-29 06:09:37.497160 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.497181 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.497211 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.497225 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.497241 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.497257 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.497273 | orchestrator |
2025-09-29 06:09:37.497289 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-09-29 06:09:37.497304 | orchestrator | Monday 29 September 2025 06:06:16 +0000 (0:00:00.604) 0:00:07.748 ******
2025-09-29 06:09:37.497319 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.497334 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.497349 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.497364 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.497379 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.497394 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.497409 | orchestrator |
2025-09-29 06:09:37.497424 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-09-29 06:09:37.497440 | orchestrator | Monday 29 September 2025 06:06:17 +0000 (0:00:00.679) 0:00:08.428 ******
2025-09-29 06:09:37.497455 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:09:37.497472 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:09:37.497487 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.497503 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:09:37.497518 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:09:37.497533 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.497548 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:09:37.497564 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:09:37.497579 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.497595 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:09:37.497632 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:09:37.497650 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.497665 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:09:37.497683 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:09:37.497698 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.497714 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:09:37.497758 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:09:37.497774 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.497791 | orchestrator |
2025-09-29 06:09:37.497807 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-09-29 06:09:37.497823 | orchestrator | Monday 29 September 2025 06:06:17 +0000 (0:00:00.744) 0:00:09.172 ******
2025-09-29 06:09:37.497840 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.497859 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.497876 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.497894 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.497912 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.497930 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.497948 | orchestrator |
2025-09-29 06:09:37.497965 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-09-29 06:09:37.497985 | orchestrator | Monday 29 September 2025 06:06:19 +0000 (0:00:01.135) 0:00:10.308 ******
2025-09-29 06:09:37.498004 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:09:37.498115 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:09:37.498141 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:09:37.498160 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.498200 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.498220 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.498238 | orchestrator |
2025-09-29 06:09:37.498257 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-09-29 06:09:37.498275 | orchestrator | Monday 29 September 2025 06:06:19 +0000 (0:00:00.800) 0:00:11.109 ******
2025-09-29 06:09:37.498294 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.498312 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:37.498330 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:37.498348 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:37.498367 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:37.498385 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:37.498412 | orchestrator |
2025-09-29 06:09:37.498430 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-09-29 06:09:37.498448 | orchestrator | Monday 29 September 2025 06:06:25 +0000 (0:00:05.321) 0:00:16.430 ******
2025-09-29 06:09:37.498466 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.498485 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.498503 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.498522 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.498540 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.498559 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.498577 | orchestrator |
2025-09-29 06:09:37.498596 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-09-29 06:09:37.498613 | orchestrator | Monday 29 September 2025 06:06:26 +0000 (0:00:00.956) 0:00:17.386 ******
2025-09-29 06:09:37.498631 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.498647 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.498665 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.498682 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.498698 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.498715 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.498770 | orchestrator |
2025-09-29 06:09:37.498788 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-09-29 06:09:37.498806 | orchestrator | Monday 29 September 2025 06:06:28 +0000 (0:00:02.214) 0:00:19.601 ******
2025-09-29 06:09:37.498822 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:09:37.498837 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:09:37.498854 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:09:37.498869 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.498885 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.498900 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.498916 | orchestrator |
2025-09-29 06:09:37.498932 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-09-29 06:09:37.498948 | orchestrator | Monday 29 September 2025 06:06:30 +0000 (0:00:01.739) 0:00:21.341 ******
2025-09-29 06:09:37.498964 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-09-29 06:09:37.498980 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-09-29 06:09:37.498996 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-09-29 06:09:37.499012 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-09-29 06:09:37.499028 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-09-29 06:09:37.499044 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-09-29 06:09:37.499061 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-09-29 06:09:37.499077 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-09-29 06:09:37.499092 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-09-29 06:09:37.499109 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-09-29 06:09:37.499125 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-09-29 06:09:37.499141 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-09-29 06:09:37.499157 | orchestrator |
2025-09-29 06:09:37.499173 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-09-29 06:09:37.499201 | orchestrator | Monday 29 September 2025 06:06:32 +0000 (0:00:02.744) 0:00:24.086 ******
2025-09-29 06:09:37.499217 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:09:37.499234 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:09:37.499249 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:09:37.499265 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.499281 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:37.499296 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:37.499312 | orchestrator |
2025-09-29 06:09:37.499344 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-09-29 06:09:37.499361 | orchestrator |
2025-09-29 06:09:37.499377 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-09-29 06:09:37.499392 | orchestrator | Monday 29 September 2025 06:06:34 +0000 (0:00:01.752) 0:00:25.838 ******
2025-09-29 06:09:37.499408 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.499424 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.499440 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.499457 | orchestrator |
2025-09-29 06:09:37.499473 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-09-29 06:09:37.499488 | orchestrator | Monday 29 September 2025 06:06:35 +0000 (0:00:01.046) 0:00:26.885 ******
2025-09-29 06:09:37.499505 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.499521 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.499537 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.499554 | orchestrator |
2025-09-29 06:09:37.499571 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-09-29 06:09:37.499588 | orchestrator | Monday 29 September 2025 06:06:36 +0000 (0:00:00.947) 0:00:27.833 ******
2025-09-29 06:09:37.499604 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.499620 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.499636 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.499651 | orchestrator |
2025-09-29 06:09:37.499668 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-09-29 06:09:37.499683 | orchestrator | Monday 29 September 2025 06:06:37 +0000 (0:00:00.946) 0:00:28.779 ******
2025-09-29 06:09:37.499700 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.499792 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.499816 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.499833 | orchestrator |
2025-09-29 06:09:37.499849 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-09-29 06:09:37.499866 | orchestrator | Monday 29 September 2025 06:06:39 +0000 (0:00:01.556) 0:00:30.336 ******
2025-09-29 06:09:37.499883 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.499900 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.499916 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.499931 | orchestrator |
2025-09-29 06:09:37.499947 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-09-29 06:09:37.499964 | orchestrator | Monday 29 September 2025 06:06:39 +0000 (0:00:00.331) 0:00:30.668 ******
2025-09-29 06:09:37.499981 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.499999 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.500014 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.500031 | orchestrator |
2025-09-29 06:09:37.500055 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-09-29 06:09:37.500069 | orchestrator | Monday 29 September 2025 06:06:39 +0000 (0:00:00.574) 0:00:31.242 ******
2025-09-29 06:09:37.500081 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:37.500094 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.500107 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:37.500120 | orchestrator |
2025-09-29 06:09:37.500151 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-09-29 06:09:37.500167 | orchestrator | Monday 29 September 2025 06:06:41 +0000 (0:00:01.782) 0:00:33.025 ******
2025-09-29 06:09:37.500182 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:09:37.500208 | orchestrator |
2025-09-29 06:09:37.500222 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-09-29 06:09:37.500236 | orchestrator | Monday 29 September 2025 06:06:42 +0000 (0:00:00.631) 0:00:33.656 ******
2025-09-29 06:09:37.500249 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.500263 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.500276 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.500289 | orchestrator |
2025-09-29 06:09:37.500303 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-09-29 06:09:37.500316 | orchestrator | Monday 29 September 2025 06:06:44 +0000 (0:00:01.663) 0:00:35.319 ******
2025-09-29 06:09:37.500329 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.500343 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.500357 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.500370 | orchestrator |
2025-09-29 06:09:37.500383 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-09-29 06:09:37.500396 | orchestrator | Monday 29 September 2025 06:06:44 +0000 (0:00:00.836) 0:00:36.156 ******
2025-09-29 06:09:37.500410 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.500422 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.500435 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.500448 | orchestrator |
2025-09-29 06:09:37.500463 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-29 06:09:37.500476 | orchestrator | Monday 29 September 2025 06:06:45 +0000 (0:00:01.031) 0:00:37.188 ******
2025-09-29 06:09:37.500489 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.500503 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.500513 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.500521 | orchestrator |
2025-09-29 06:09:37.500528 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-29 06:09:37.500538 | orchestrator | Monday 29 September 2025 06:06:47 +0000 (0:00:01.617) 0:00:38.806 ******
2025-09-29 06:09:37.500552 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.500560 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.500568 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.500576 | orchestrator |
2025-09-29 06:09:37.500584 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-29 06:09:37.500591 | orchestrator | Monday 29 September 2025 06:06:47 +0000 (0:00:00.453) 0:00:39.260 ******
2025-09-29 06:09:37.500599 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.500607 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.500619 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.500632 | orchestrator |
2025-09-29 06:09:37.500645 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-29 06:09:37.500658 | orchestrator | Monday 29 September 2025 06:06:48 +0000 (0:00:00.558) 0:00:39.818 ******
2025-09-29 06:09:37.500670 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:09:37.500683 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:09:37.500697 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:09:37.500711 | orchestrator |
2025-09-29 06:09:37.500763 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-29 06:09:37.500780 | orchestrator | Monday 29 September 2025 06:06:51 +0000 (0:00:02.780) 0:00:42.599 ******
2025-09-29 06:09:37.500796 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-29 06:09:37.500810 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-29 06:09:37.500824 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-29 06:09:37.500837 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-29 06:09:37.500865 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-29 06:09:37.500876 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-09-29 06:09:37.500890 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-29 06:09:37.500898 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-29 06:09:37.500906 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-09-29 06:09:37.500914 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-29 06:09:37.500929 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-09-29 06:09:37.500937 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-09-29 06:09:37.500948 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:09:37.500961 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.500975 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.500988 | orchestrator | 2025-09-29 06:09:37.501002 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-09-29 06:09:37.501015 | orchestrator | Monday 29 September 2025 06:07:35 +0000 (0:00:44.560) 0:01:27.160 ****** 2025-09-29 06:09:37.501029 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.501043 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:09:37.501056 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:09:37.501069 | orchestrator | 2025-09-29 06:09:37.501083 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-09-29 06:09:37.501096 | orchestrator | Monday 29 September 2025 06:07:36 +0000 (0:00:00.289) 0:01:27.449 ****** 2025-09-29 06:09:37.501109 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.501121 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.501133 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.501146 | orchestrator | 2025-09-29 06:09:37.501160 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-09-29 06:09:37.501174 | orchestrator | Monday 29 September 2025 06:07:37 +0000 (0:00:00.987) 0:01:28.437 ****** 2025-09-29 06:09:37.501187 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.501200 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.501214 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.501226 | orchestrator | 2025-09-29 06:09:37.501240 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-09-29 06:09:37.501253 | orchestrator | Monday 29 September 2025 06:07:38 +0000 (0:00:01.372) 0:01:29.809 ****** 2025-09-29 06:09:37.501266 
| orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.501280 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.501292 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.501306 | orchestrator | 2025-09-29 06:09:37.501319 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-09-29 06:09:37.501332 | orchestrator | Monday 29 September 2025 06:08:03 +0000 (0:00:25.079) 0:01:54.889 ****** 2025-09-29 06:09:37.501345 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:09:37.501359 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.501372 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.501385 | orchestrator | 2025-09-29 06:09:37.501398 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-09-29 06:09:37.501412 | orchestrator | Monday 29 September 2025 06:08:04 +0000 (0:00:00.697) 0:01:55.586 ****** 2025-09-29 06:09:37.501436 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.501449 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:09:37.501462 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.501475 | orchestrator | 2025-09-29 06:09:37.501489 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-09-29 06:09:37.501502 | orchestrator | Monday 29 September 2025 06:08:04 +0000 (0:00:00.667) 0:01:56.254 ****** 2025-09-29 06:09:37.501515 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.501528 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.501541 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.501554 | orchestrator | 2025-09-29 06:09:37.501567 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-09-29 06:09:37.501581 | orchestrator | Monday 29 September 2025 06:08:05 +0000 (0:00:00.627) 0:01:56.881 ****** 2025-09-29 06:09:37.501594 | orchestrator | ok: [testbed-node-1] 
2025-09-29 06:09:37.501617 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.501631 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.501644 | orchestrator | 2025-09-29 06:09:37.501657 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-09-29 06:09:37.501670 | orchestrator | Monday 29 September 2025 06:08:06 +0000 (0:00:00.851) 0:01:57.733 ****** 2025-09-29 06:09:37.501684 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.501697 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:09:37.501712 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.501779 | orchestrator | 2025-09-29 06:09:37.501821 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-09-29 06:09:37.501837 | orchestrator | Monday 29 September 2025 06:08:06 +0000 (0:00:00.313) 0:01:58.047 ****** 2025-09-29 06:09:37.501850 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.501863 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.501876 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.501889 | orchestrator | 2025-09-29 06:09:37.501902 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-09-29 06:09:37.501916 | orchestrator | Monday 29 September 2025 06:08:07 +0000 (0:00:00.669) 0:01:58.716 ****** 2025-09-29 06:09:37.501929 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.501942 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.501955 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.501969 | orchestrator | 2025-09-29 06:09:37.501982 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-09-29 06:09:37.501996 | orchestrator | Monday 29 September 2025 06:08:08 +0000 (0:00:00.679) 0:01:59.396 ****** 2025-09-29 06:09:37.502008 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.502060 | 
orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.502074 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.502088 | orchestrator | 2025-09-29 06:09:37.502101 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-09-29 06:09:37.502115 | orchestrator | Monday 29 September 2025 06:08:09 +0000 (0:00:01.105) 0:02:00.501 ****** 2025-09-29 06:09:37.502129 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:09:37.502143 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:09:37.502157 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:09:37.502172 | orchestrator | 2025-09-29 06:09:37.502186 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-09-29 06:09:37.502200 | orchestrator | Monday 29 September 2025 06:08:10 +0000 (0:00:00.841) 0:02:01.342 ****** 2025-09-29 06:09:37.502214 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.502238 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:09:37.502252 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:09:37.502263 | orchestrator | 2025-09-29 06:09:37.502275 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-09-29 06:09:37.502286 | orchestrator | Monday 29 September 2025 06:08:10 +0000 (0:00:00.297) 0:02:01.640 ****** 2025-09-29 06:09:37.502297 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.502317 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:09:37.502328 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:09:37.502339 | orchestrator | 2025-09-29 06:09:37.502350 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-09-29 06:09:37.502361 | orchestrator | Monday 29 September 2025 06:08:10 +0000 (0:00:00.412) 0:02:02.053 ****** 2025-09-29 06:09:37.502373 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.502384 | orchestrator | 
ok: [testbed-node-1] 2025-09-29 06:09:37.502396 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.502407 | orchestrator | 2025-09-29 06:09:37.502418 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-09-29 06:09:37.502429 | orchestrator | Monday 29 September 2025 06:08:11 +0000 (0:00:00.968) 0:02:03.021 ****** 2025-09-29 06:09:37.502440 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.502452 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:09:37.502462 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.502473 | orchestrator | 2025-09-29 06:09:37.502484 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-09-29 06:09:37.502495 | orchestrator | Monday 29 September 2025 06:08:12 +0000 (0:00:00.575) 0:02:03.597 ****** 2025-09-29 06:09:37.502507 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-29 06:09:37.502518 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-29 06:09:37.502530 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-09-29 06:09:37.502541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-29 06:09:37.502552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-29 06:09:37.502563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-09-29 06:09:37.502574 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-29 06:09:37.502585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-29 
06:09:37.502596 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-09-29 06:09:37.502607 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-09-29 06:09:37.502618 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-29 06:09:37.502628 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-29 06:09:37.502640 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-09-29 06:09:37.502663 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-29 06:09:37.502676 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-29 06:09:37.502687 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-29 06:09:37.502698 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-29 06:09:37.502710 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-09-29 06:09:37.502737 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-09-29 06:09:37.502748 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-09-29 06:09:37.502759 | orchestrator | 2025-09-29 06:09:37.502770 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-09-29 06:09:37.502781 | orchestrator | 2025-09-29 06:09:37.502793 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-09-29 06:09:37.502814 | orchestrator | Monday 29 September 2025 06:08:15 +0000 (0:00:03.192) 
0:02:06.789 ****** 2025-09-29 06:09:37.502825 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:09:37.502836 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:09:37.502846 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:09:37.502858 | orchestrator | 2025-09-29 06:09:37.502869 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-09-29 06:09:37.502881 | orchestrator | Monday 29 September 2025 06:08:15 +0000 (0:00:00.404) 0:02:07.193 ****** 2025-09-29 06:09:37.502892 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:09:37.502903 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:09:37.502914 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:09:37.502925 | orchestrator | 2025-09-29 06:09:37.502936 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-09-29 06:09:37.502947 | orchestrator | Monday 29 September 2025 06:08:16 +0000 (0:00:00.656) 0:02:07.850 ****** 2025-09-29 06:09:37.502957 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:09:37.502969 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:09:37.502981 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:09:37.502992 | orchestrator | 2025-09-29 06:09:37.503003 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-09-29 06:09:37.503014 | orchestrator | Monday 29 September 2025 06:08:16 +0000 (0:00:00.349) 0:02:08.199 ****** 2025-09-29 06:09:37.503025 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:09:37.503037 | orchestrator | 2025-09-29 06:09:37.503048 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-09-29 06:09:37.503058 | orchestrator | Monday 29 September 2025 06:08:17 +0000 (0:00:00.731) 0:02:08.931 ****** 2025-09-29 06:09:37.503069 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:09:37.503081 | 
orchestrator | skipping: [testbed-node-4] 2025-09-29 06:09:37.503092 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:09:37.503103 | orchestrator | 2025-09-29 06:09:37.503113 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-09-29 06:09:37.503125 | orchestrator | Monday 29 September 2025 06:08:17 +0000 (0:00:00.332) 0:02:09.263 ****** 2025-09-29 06:09:37.503135 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:09:37.503146 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:09:37.503157 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:09:37.503168 | orchestrator | 2025-09-29 06:09:37.503179 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-09-29 06:09:37.503191 | orchestrator | Monday 29 September 2025 06:08:18 +0000 (0:00:00.290) 0:02:09.554 ****** 2025-09-29 06:09:37.503202 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:09:37.503213 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:09:37.503224 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:09:37.503235 | orchestrator | 2025-09-29 06:09:37.503246 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2025-09-29 06:09:37.503258 | orchestrator | Monday 29 September 2025 06:08:18 +0000 (0:00:00.280) 0:02:09.835 ****** 2025-09-29 06:09:37.503270 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:09:37.503282 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:09:37.503294 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:09:37.503306 | orchestrator | 2025-09-29 06:09:37.503318 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2025-09-29 06:09:37.503331 | orchestrator | Monday 29 September 2025 06:08:19 +0000 (0:00:00.810) 0:02:10.645 ****** 2025-09-29 06:09:37.503342 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:09:37.503354 | orchestrator | 
changed: [testbed-node-4] 2025-09-29 06:09:37.503366 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:09:37.503378 | orchestrator | 2025-09-29 06:09:37.503391 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-09-29 06:09:37.503403 | orchestrator | Monday 29 September 2025 06:08:20 +0000 (0:00:01.089) 0:02:11.735 ****** 2025-09-29 06:09:37.503432 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:09:37.503445 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:09:37.503456 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:09:37.503468 | orchestrator | 2025-09-29 06:09:37.503479 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-09-29 06:09:37.503491 | orchestrator | Monday 29 September 2025 06:08:21 +0000 (0:00:01.209) 0:02:12.945 ****** 2025-09-29 06:09:37.503502 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:09:37.503513 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:09:37.503524 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:09:37.503534 | orchestrator | 2025-09-29 06:09:37.503545 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-29 06:09:37.503556 | orchestrator | 2025-09-29 06:09:37.503567 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-29 06:09:37.503578 | orchestrator | Monday 29 September 2025 06:08:34 +0000 (0:00:12.705) 0:02:25.650 ****** 2025-09-29 06:09:37.503589 | orchestrator | ok: [testbed-manager] 2025-09-29 06:09:37.503601 | orchestrator | 2025-09-29 06:09:37.503614 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-29 06:09:37.503626 | orchestrator | Monday 29 September 2025 06:08:35 +0000 (0:00:00.790) 0:02:26.441 ****** 2025-09-29 06:09:37.503647 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.503660 | 
orchestrator | 2025-09-29 06:09:37.504204 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-29 06:09:37.504232 | orchestrator | Monday 29 September 2025 06:08:35 +0000 (0:00:00.380) 0:02:26.821 ****** 2025-09-29 06:09:37.504240 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-29 06:09:37.504246 | orchestrator | 2025-09-29 06:09:37.504253 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-29 06:09:37.504260 | orchestrator | Monday 29 September 2025 06:08:36 +0000 (0:00:00.530) 0:02:27.351 ****** 2025-09-29 06:09:37.504267 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.504276 | orchestrator | 2025-09-29 06:09:37.504287 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-29 06:09:37.504295 | orchestrator | Monday 29 September 2025 06:08:37 +0000 (0:00:01.292) 0:02:28.644 ****** 2025-09-29 06:09:37.504302 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.504309 | orchestrator | 2025-09-29 06:09:37.504315 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-29 06:09:37.504322 | orchestrator | Monday 29 September 2025 06:08:37 +0000 (0:00:00.603) 0:02:29.248 ****** 2025-09-29 06:09:37.504329 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-29 06:09:37.504335 | orchestrator | 2025-09-29 06:09:37.504341 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-29 06:09:37.504348 | orchestrator | Monday 29 September 2025 06:08:39 +0000 (0:00:01.461) 0:02:30.709 ****** 2025-09-29 06:09:37.504354 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-29 06:09:37.504360 | orchestrator | 2025-09-29 06:09:37.504366 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-29 
06:09:37.504372 | orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:00.733) 0:02:31.442 ****** 2025-09-29 06:09:37.504378 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.504384 | orchestrator | 2025-09-29 06:09:37.504390 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-29 06:09:37.504396 | orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:00.393) 0:02:31.836 ****** 2025-09-29 06:09:37.504402 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.504408 | orchestrator | 2025-09-29 06:09:37.504414 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-09-29 06:09:37.504420 | orchestrator | 2025-09-29 06:09:37.504426 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-09-29 06:09:37.504432 | orchestrator | Monday 29 September 2025 06:08:41 +0000 (0:00:00.703) 0:02:32.539 ****** 2025-09-29 06:09:37.504446 | orchestrator | ok: [testbed-manager] 2025-09-29 06:09:37.504452 | orchestrator | 2025-09-29 06:09:37.504458 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-09-29 06:09:37.504464 | orchestrator | Monday 29 September 2025 06:08:41 +0000 (0:00:00.139) 0:02:32.679 ****** 2025-09-29 06:09:37.504470 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-09-29 06:09:37.504476 | orchestrator | 2025-09-29 06:09:37.504482 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-09-29 06:09:37.504488 | orchestrator | Monday 29 September 2025 06:08:41 +0000 (0:00:00.243) 0:02:32.923 ****** 2025-09-29 06:09:37.504494 | orchestrator | ok: [testbed-manager] 2025-09-29 06:09:37.504500 | orchestrator | 2025-09-29 06:09:37.504506 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 
2025-09-29 06:09:37.504512 | orchestrator | Monday 29 September 2025 06:08:42 +0000 (0:00:00.981) 0:02:33.904 ****** 2025-09-29 06:09:37.504518 | orchestrator | ok: [testbed-manager] 2025-09-29 06:09:37.504524 | orchestrator | 2025-09-29 06:09:37.504530 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-09-29 06:09:37.504536 | orchestrator | Monday 29 September 2025 06:08:44 +0000 (0:00:01.482) 0:02:35.387 ****** 2025-09-29 06:09:37.504542 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.504548 | orchestrator | 2025-09-29 06:09:37.504554 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-09-29 06:09:37.504560 | orchestrator | Monday 29 September 2025 06:08:44 +0000 (0:00:00.814) 0:02:36.202 ****** 2025-09-29 06:09:37.504566 | orchestrator | ok: [testbed-manager] 2025-09-29 06:09:37.504572 | orchestrator | 2025-09-29 06:09:37.504579 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-09-29 06:09:37.504585 | orchestrator | Monday 29 September 2025 06:08:45 +0000 (0:00:00.559) 0:02:36.761 ****** 2025-09-29 06:09:37.504591 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.504597 | orchestrator | 2025-09-29 06:09:37.504603 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-09-29 06:09:37.504609 | orchestrator | Monday 29 September 2025 06:08:53 +0000 (0:00:07.981) 0:02:44.743 ****** 2025-09-29 06:09:37.504615 | orchestrator | changed: [testbed-manager] 2025-09-29 06:09:37.504621 | orchestrator | 2025-09-29 06:09:37.504627 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-09-29 06:09:37.504632 | orchestrator | Monday 29 September 2025 06:09:06 +0000 (0:00:12.706) 0:02:57.449 ****** 2025-09-29 06:09:37.504644 | orchestrator | ok: [testbed-manager] 2025-09-29 06:09:37.504650 | orchestrator 
| 2025-09-29 06:09:37.504656 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-09-29 06:09:37.504662 | orchestrator | 2025-09-29 06:09:37.504668 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-09-29 06:09:37.504674 | orchestrator | Monday 29 September 2025 06:09:06 +0000 (0:00:00.503) 0:02:57.953 ****** 2025-09-29 06:09:37.504680 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:09:37.504686 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:09:37.504692 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:09:37.504698 | orchestrator | 2025-09-29 06:09:37.504704 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-09-29 06:09:37.504710 | orchestrator | Monday 29 September 2025 06:09:06 +0000 (0:00:00.264) 0:02:58.217 ****** 2025-09-29 06:09:37.504716 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.504778 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:09:37.504787 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:09:37.504793 | orchestrator | 2025-09-29 06:09:37.504808 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-09-29 06:09:37.504815 | orchestrator | Monday 29 September 2025 06:09:07 +0000 (0:00:00.279) 0:02:58.497 ****** 2025-09-29 06:09:37.504821 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:09:37.504827 | orchestrator | 2025-09-29 06:09:37.504838 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-09-29 06:09:37.504844 | orchestrator | Monday 29 September 2025 06:09:07 +0000 (0:00:00.548) 0:02:59.046 ****** 2025-09-29 06:09:37.504850 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.504856 | orchestrator | 2025-09-29 06:09:37.504863 | 
orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] ********************** 2025-09-29 06:09:37.504869 | orchestrator | Monday 29 September 2025 06:09:07 +0000 (0:00:00.153) 0:02:59.199 ****** 2025-09-29 06:09:37.504875 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.504881 | orchestrator | 2025-09-29 06:09:37.504887 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ******** 2025-09-29 06:09:37.504893 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:00.175) 0:02:59.375 ****** 2025-09-29 06:09:37.504899 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.504905 | orchestrator | 2025-09-29 06:09:37.504911 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] ************* 2025-09-29 06:09:37.504917 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:00.210) 0:02:59.585 ****** 2025-09-29 06:09:37.504923 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.504929 | orchestrator | 2025-09-29 06:09:37.504935 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] ************** 2025-09-29 06:09:37.504942 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:00.188) 0:02:59.774 ****** 2025-09-29 06:09:37.504948 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.504954 | orchestrator | 2025-09-29 06:09:37.504960 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] ********************** 2025-09-29 06:09:37.504966 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:00.254) 0:03:00.028 ****** 2025-09-29 06:09:37.504972 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:09:37.504978 | orchestrator | 2025-09-29 06:09:37.504985 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ****************** 2025-09-29 06:09:37.504996 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:00.163) 0:03:00.192 ****** 
2025-09-29 06:09:37.505006 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505017 | orchestrator |
2025-09-29 06:09:37.505029 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-29 06:09:37.505039 | orchestrator | Monday 29 September 2025  06:09:09 +0000 (0:00:00.191)       0:03:00.383 ******
2025-09-29 06:09:37.505050 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505061 | orchestrator |
2025-09-29 06:09:37.505071 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-29 06:09:37.505082 | orchestrator | Monday 29 September 2025  06:09:09 +0000 (0:00:00.179)       0:03:00.563 ******
2025-09-29 06:09:37.505088 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505094 | orchestrator |
2025-09-29 06:09:37.505101 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-29 06:09:37.505112 | orchestrator | Monday 29 September 2025  06:09:09 +0000 (0:00:00.600)       0:03:01.163 ******
2025-09-29 06:09:37.505123 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-29 06:09:37.505135 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-29 06:09:37.505142 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505148 | orchestrator |
2025-09-29 06:09:37.505154 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-29 06:09:37.505160 | orchestrator | Monday 29 September 2025  06:09:10 +0000 (0:00:00.278)       0:03:01.441 ******
2025-09-29 06:09:37.505166 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505172 | orchestrator |
2025-09-29 06:09:37.505179 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-29 06:09:37.505185 | orchestrator | Monday 29 September 2025  06:09:10 +0000 (0:00:00.225)       0:03:01.667 ******
2025-09-29 06:09:37.505192 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505202 | orchestrator |
2025-09-29 06:09:37.505212 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-29 06:09:37.505227 | orchestrator | Monday 29 September 2025  06:09:10 +0000 (0:00:00.345)       0:03:02.013 ******
2025-09-29 06:09:37.505237 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505247 | orchestrator |
2025-09-29 06:09:37.505258 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-29 06:09:37.505269 | orchestrator | Monday 29 September 2025  06:09:10 +0000 (0:00:00.187)       0:03:02.200 ******
2025-09-29 06:09:37.505278 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505289 | orchestrator |
2025-09-29 06:09:37.505299 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-29 06:09:37.505310 | orchestrator | Monday 29 September 2025  06:09:11 +0000 (0:00:00.327)       0:03:02.528 ******
2025-09-29 06:09:37.505317 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505323 | orchestrator |
2025-09-29 06:09:37.505339 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-29 06:09:37.505350 | orchestrator | Monday 29 September 2025  06:09:11 +0000 (0:00:00.221)       0:03:02.749 ******
2025-09-29 06:09:37.505360 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505372 | orchestrator |
2025-09-29 06:09:37.505378 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-29 06:09:37.505384 | orchestrator | Monday 29 September 2025  06:09:11 +0000 (0:00:00.176)       0:03:02.925 ******
2025-09-29 06:09:37.505390 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505396 | orchestrator |
2025-09-29 06:09:37.505402 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-29 06:09:37.505408 | orchestrator | Monday 29 September 2025  06:09:11 +0000 (0:00:00.172)       0:03:03.098 ******
2025-09-29 06:09:37.505418 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505427 | orchestrator |
2025-09-29 06:09:37.505438 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-29 06:09:37.505457 | orchestrator | Monday 29 September 2025  06:09:12 +0000 (0:00:00.169)       0:03:03.267 ******
2025-09-29 06:09:37.505464 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505470 | orchestrator |
2025-09-29 06:09:37.505476 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-29 06:09:37.505483 | orchestrator | Monday 29 September 2025  06:09:12 +0000 (0:00:00.170)       0:03:03.438 ******
2025-09-29 06:09:37.505489 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505495 | orchestrator |
2025-09-29 06:09:37.505501 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-29 06:09:37.505507 | orchestrator | Monday 29 September 2025  06:09:12 +0000 (0:00:00.168)       0:03:03.606 ******
2025-09-29 06:09:37.505516 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505527 | orchestrator |
2025-09-29 06:09:37.505538 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-29 06:09:37.505548 | orchestrator | Monday 29 September 2025  06:09:12 +0000 (0:00:00.537)       0:03:04.144 ******
2025-09-29 06:09:37.505556 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-29 06:09:37.505563 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-29 06:09:37.505569 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-29 06:09:37.505575 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-29 06:09:37.505581 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505587 | orchestrator |
2025-09-29 06:09:37.505593 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-29 06:09:37.505599 | orchestrator | Monday 29 September 2025  06:09:13 +0000 (0:00:00.417)       0:03:04.561 ******
2025-09-29 06:09:37.505605 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505611 | orchestrator |
2025-09-29 06:09:37.505618 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-29 06:09:37.505623 | orchestrator | Monday 29 September 2025  06:09:13 +0000 (0:00:00.263)       0:03:04.825 ******
2025-09-29 06:09:37.505629 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505636 | orchestrator |
2025-09-29 06:09:37.505647 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-29 06:09:37.505653 | orchestrator | Monday 29 September 2025  06:09:13 +0000 (0:00:00.377)       0:03:05.203 ******
2025-09-29 06:09:37.505659 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505665 | orchestrator |
2025-09-29 06:09:37.505671 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-29 06:09:37.505677 | orchestrator | Monday 29 September 2025  06:09:14 +0000 (0:00:00.198)       0:03:05.401 ******
2025-09-29 06:09:37.505683 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505689 | orchestrator |
2025-09-29 06:09:37.505695 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-29 06:09:37.505701 | orchestrator | Monday 29 September 2025  06:09:14 +0000 (0:00:00.179)       0:03:05.581 ******
2025-09-29 06:09:37.505708 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-29 06:09:37.505714 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-29 06:09:37.505720 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505742 | orchestrator |
2025-09-29 06:09:37.505749 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-29 06:09:37.505755 | orchestrator | Monday 29 September 2025  06:09:14 +0000 (0:00:00.275)       0:03:05.856 ******
2025-09-29 06:09:37.505761 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.505767 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.505773 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.505779 | orchestrator |
2025-09-29 06:09:37.505785 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-29 06:09:37.505791 | orchestrator | Monday 29 September 2025  06:09:14 +0000 (0:00:00.268)       0:03:06.125 ******
2025-09-29 06:09:37.505797 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.505803 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.505810 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.505816 | orchestrator |
2025-09-29 06:09:37.505822 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-29 06:09:37.505828 | orchestrator |
2025-09-29 06:09:37.505834 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-29 06:09:37.505840 | orchestrator | Monday 29 September 2025  06:09:15 +0000 (0:00:01.006)       0:03:07.132 ******
2025-09-29 06:09:37.505846 | orchestrator | ok: [testbed-manager]
2025-09-29 06:09:37.505853 | orchestrator |
2025-09-29 06:09:37.505859 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-29 06:09:37.505865 | orchestrator | Monday 29 September 2025  06:09:15 +0000 (0:00:00.120)       0:03:07.252 ******
2025-09-29 06:09:37.505871 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-29 06:09:37.505877 | orchestrator |
2025-09-29 06:09:37.505883 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-29 06:09:37.505889 | orchestrator | Monday 29 September 2025  06:09:16 +0000 (0:00:00.206)       0:03:07.459 ******
2025-09-29 06:09:37.505899 | orchestrator | changed: [testbed-manager]
2025-09-29 06:09:37.505906 | orchestrator |
2025-09-29 06:09:37.505912 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-29 06:09:37.505918 | orchestrator |
2025-09-29 06:09:37.505928 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-29 06:09:37.505939 | orchestrator | Monday 29 September 2025  06:09:21 +0000 (0:00:05.304)       0:03:12.764 ******
2025-09-29 06:09:37.505950 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:09:37.505961 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:09:37.505972 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:09:37.505983 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:09:37.505994 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:09:37.506005 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:09:37.506243 | orchestrator |
2025-09-29 06:09:37.506266 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-29 06:09:37.506287 | orchestrator | Monday 29 September 2025  06:09:22 +0000 (0:00:01.009)       0:03:13.773 ******
2025-09-29 06:09:37.506302 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-29 06:09:37.506313 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-29 06:09:37.506324 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-29 06:09:37.506335 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-29 06:09:37.506345 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-29 06:09:37.506355 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-29 06:09:37.506366 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-29 06:09:37.506376 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-29 06:09:37.506385 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-29 06:09:37.506391 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-29 06:09:37.506397 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-29 06:09:37.506403 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-29 06:09:37.506411 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-29 06:09:37.506421 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-29 06:09:37.506432 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-29 06:09:37.506442 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-29 06:09:37.506452 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-29 06:09:37.506463 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-29 06:09:37.506473 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-29 06:09:37.506525 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-29 06:09:37.506537 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-29 06:09:37.506548 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-29 06:09:37.506558 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-29 06:09:37.506570 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-29 06:09:37.506581 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-29 06:09:37.506592 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-29 06:09:37.506603 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-29 06:09:37.506614 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-29 06:09:37.506623 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-29 06:09:37.506629 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-29 06:09:37.506636 | orchestrator |
2025-09-29 06:09:37.506642 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-29 06:09:37.506648 | orchestrator | Monday 29 September 2025  06:09:34 +0000 (0:00:12.232)       0:03:26.006 ******
2025-09-29 06:09:37.506654 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.506661 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.506675 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.506681 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.506687 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.506693 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.506699 | orchestrator |
2025-09-29 06:09:37.506705 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-29 06:09:37.506711 | orchestrator | Monday 29 September 2025  06:09:35 +0000 (0:00:00.639)       0:03:26.646 ******
2025-09-29 06:09:37.506717 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:09:37.506747 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:09:37.506754 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:09:37.506760 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:09:37.506771 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:09:37.506777 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:09:37.506783 | orchestrator |
2025-09-29 06:09:37.506789 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:09:37.506796 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:09:37.506804 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-29 06:09:37.506811 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-29 06:09:37.506823 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-29 06:09:37.506830 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-29 06:09:37.506836 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-29 06:09:37.506842 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-29 06:09:37.506848 | orchestrator |
2025-09-29 06:09:37.506854 | orchestrator |
2025-09-29 06:09:37.506861 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:09:37.506867 | orchestrator | Monday 29 September 2025  06:09:35 +0000 (0:00:00.454)       0:03:27.100 ******
2025-09-29 06:09:37.506873 | orchestrator | ===============================================================================
2025-09-29 06:09:37.506880 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.56s
2025-09-29 06:09:37.506887 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.08s
2025-09-29 06:09:37.506893 | orchestrator | kubectl : Install required packages ------------------------------------ 12.71s
2025-09-29 06:09:37.506899 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.71s
2025-09-29 06:09:37.506905 | orchestrator | Manage labels ---------------------------------------------------------- 12.23s
2025-09-29 06:09:37.506911 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.98s
2025-09-29 06:09:37.506917 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.32s
2025-09-29 06:09:37.506923 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.30s
2025-09-29 06:09:37.506929 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.19s
2025-09-29 06:09:37.506936 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.78s
2025-09-29 06:09:37.506942 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.74s
2025-09-29 06:09:37.506948 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.21s
2025-09-29 06:09:37.506959 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.17s
2025-09-29 06:09:37.506965 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.78s
2025-09-29 06:09:37.506971 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.75s
2025-09-29 06:09:37.506978 | orchestrator | k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry --- 1.74s
2025-09-29 06:09:37.506984 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.66s
2025-09-29 06:09:37.506990 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.62s
2025-09-29 06:09:37.506998 | orchestrator | k3s_server : Clean previous runs of k3s-init ---------------------------- 1.56s
2025-09-29 06:09:37.507009 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.48s
2025-09-29 06:09:37.507018 | orchestrator | 2025-09-29 06:09:37 | INFO  | Task bb41f95a-730f-4a0e-9647-24a8d5fb8b75 is in state STARTED
2025-09-29 06:09:37.507036 | orchestrator | 2025-09-29 06:09:37 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:37.507047 | orchestrator | 2025-09-29 06:09:37 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:37.507057 | orchestrator | 2025-09-29 06:09:37 | INFO  | Task 69316f2e-090c-4b6c-ae1d-536dfe678b50 is in state STARTED
2025-09-29 06:09:37.507067 | orchestrator | 2025-09-29 06:09:37 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:37.507077 | orchestrator | 2025-09-29 06:09:37 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:40.533015 | orchestrator | 2025-09-29 06:09:40 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:40.533231 | orchestrator | 2025-09-29 06:09:40 | INFO  | Task bb41f95a-730f-4a0e-9647-24a8d5fb8b75 is in state STARTED
2025-09-29 06:09:40.536436 | orchestrator | 2025-09-29 06:09:40 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:40.537080 | orchestrator | 2025-09-29 06:09:40 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:40.537625 | orchestrator | 2025-09-29 06:09:40 | INFO  | Task 69316f2e-090c-4b6c-ae1d-536dfe678b50 is in state STARTED
2025-09-29 06:09:40.538237 | orchestrator | 2025-09-29 06:09:40 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:40.538263 | orchestrator | 2025-09-29 06:09:40 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:43.588276 | orchestrator | 2025-09-29 06:09:43 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:43.588346 | orchestrator | 2025-09-29 06:09:43 | INFO  | Task bb41f95a-730f-4a0e-9647-24a8d5fb8b75 is in state STARTED
2025-09-29 06:09:43.588353 | orchestrator | 2025-09-29 06:09:43 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:43.588357 | orchestrator | 2025-09-29 06:09:43 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:43.588362 | orchestrator | 2025-09-29 06:09:43 | INFO  | Task 69316f2e-090c-4b6c-ae1d-536dfe678b50 is in state STARTED
2025-09-29 06:09:43.588366 | orchestrator | 2025-09-29 06:09:43 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:43.588569 | orchestrator | 2025-09-29 06:09:43 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:46.610275 | orchestrator | 2025-09-29 06:09:46 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:46.610377 | orchestrator | 2025-09-29 06:09:46 | INFO  | Task bb41f95a-730f-4a0e-9647-24a8d5fb8b75 is in state SUCCESS
2025-09-29 06:09:46.611040 | orchestrator | 2025-09-29 06:09:46 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:46.611244 | orchestrator | 2025-09-29 06:09:46 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:46.611781 | orchestrator | 2025-09-29 06:09:46 | INFO  | Task 69316f2e-090c-4b6c-ae1d-536dfe678b50 is in state SUCCESS
2025-09-29 06:09:46.612472 | orchestrator | 2025-09-29 06:09:46 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:46.612497 | orchestrator | 2025-09-29 06:09:46 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:49.637443 | orchestrator | 2025-09-29 06:09:49 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:49.639894 | orchestrator | 2025-09-29 06:09:49 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:49.639950 | orchestrator | 2025-09-29 06:09:49 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:49.641821 | orchestrator | 2025-09-29 06:09:49 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:49.641929 | orchestrator | 2025-09-29 06:09:49 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:52.690201 | orchestrator | 2025-09-29 06:09:52 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:52.691056 | orchestrator | 2025-09-29 06:09:52 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:52.691510 | orchestrator | 2025-09-29 06:09:52 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:52.692019 | orchestrator | 2025-09-29 06:09:52 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:52.692042 | orchestrator | 2025-09-29 06:09:52 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:55.724445 | orchestrator | 2025-09-29 06:09:55 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:55.726621 | orchestrator | 2025-09-29 06:09:55 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:55.728583 | orchestrator | 2025-09-29 06:09:55 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:55.730247 | orchestrator | 2025-09-29 06:09:55 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:55.730298 | orchestrator | 2025-09-29 06:09:55 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:09:58.773479 | orchestrator | 2025-09-29 06:09:58 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:09:58.773560 | orchestrator | 2025-09-29 06:09:58 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:09:58.774815 | orchestrator | 2025-09-29 06:09:58 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:09:58.775904 | orchestrator | 2025-09-29 06:09:58 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:09:58.775963 | orchestrator | 2025-09-29 06:09:58 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:01.818979 | orchestrator | 2025-09-29 06:10:01 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:01.820103 | orchestrator | 2025-09-29 06:10:01 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:01.821760 | orchestrator | 2025-09-29 06:10:01 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:01.825355 | orchestrator | 2025-09-29 06:10:01 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:01.825415 | orchestrator | 2025-09-29 06:10:01 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:04.861098 | orchestrator | 2025-09-29 06:10:04 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:04.861186 | orchestrator | 2025-09-29 06:10:04 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:04.864582 | orchestrator | 2025-09-29 06:10:04 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:04.865330 | orchestrator | 2025-09-29 06:10:04 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:04.865367 | orchestrator | 2025-09-29 06:10:04 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:07.944070 | orchestrator | 2025-09-29 06:10:07 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:07.947742 | orchestrator | 2025-09-29 06:10:07 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:07.950243 | orchestrator | 2025-09-29 06:10:07 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:07.953245 | orchestrator | 2025-09-29 06:10:07 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:07.953286 | orchestrator | 2025-09-29 06:10:07 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:10.990404 | orchestrator | 2025-09-29 06:10:10 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:10.990577 | orchestrator | 2025-09-29 06:10:10 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:10.994409 | orchestrator | 2025-09-29 06:10:10 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:10.996458 | orchestrator | 2025-09-29 06:10:10 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:10.996510 | orchestrator | 2025-09-29 06:10:10 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:14.028549 | orchestrator | 2025-09-29 06:10:14 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:14.029173 | orchestrator | 2025-09-29 06:10:14 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:14.029679 | orchestrator | 2025-09-29 06:10:14 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:14.032591 | orchestrator | 2025-09-29 06:10:14 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:14.032665 | orchestrator | 2025-09-29 06:10:14 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:17.065794 | orchestrator | 2025-09-29 06:10:17 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:17.066967 | orchestrator | 2025-09-29 06:10:17 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:17.068700 | orchestrator | 2025-09-29 06:10:17 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:17.069942 | orchestrator | 2025-09-29 06:10:17 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:17.069960 | orchestrator | 2025-09-29 06:10:17 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:20.105128 | orchestrator | 2025-09-29 06:10:20 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:20.106733 | orchestrator | 2025-09-29 06:10:20 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:20.108765 | orchestrator | 2025-09-29 06:10:20 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:20.110284 | orchestrator | 2025-09-29 06:10:20 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:20.110463 | orchestrator | 2025-09-29 06:10:20 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:23.149123 | orchestrator | 2025-09-29 06:10:23 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:23.151111 | orchestrator | 2025-09-29 06:10:23 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:23.153851 | orchestrator | 2025-09-29 06:10:23 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:23.156419 | orchestrator | 2025-09-29 06:10:23 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:23.156914 | orchestrator | 2025-09-29 06:10:23 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:26.194874 | orchestrator | 2025-09-29 06:10:26 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:26.194970 | orchestrator | 2025-09-29 06:10:26 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:26.197161 | orchestrator | 2025-09-29 06:10:26 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:26.197515 | orchestrator | 2025-09-29 06:10:26 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:26.197540 | orchestrator | 2025-09-29 06:10:26 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:29.234942 | orchestrator | 2025-09-29 06:10:29 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:29.236380 | orchestrator | 2025-09-29 06:10:29 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:29.239097 | orchestrator | 2025-09-29 06:10:29 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:29.241117 | orchestrator | 2025-09-29 06:10:29 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:29.241433 | orchestrator | 2025-09-29 06:10:29 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:32.277334 | orchestrator | 2025-09-29 06:10:32 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:32.277440 | orchestrator | 2025-09-29 06:10:32 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:32.278616 | orchestrator | 2025-09-29 06:10:32 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:32.280421 | orchestrator | 2025-09-29 06:10:32 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:32.280468 | orchestrator | 2025-09-29 06:10:32 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:35.306115 | orchestrator | 2025-09-29 06:10:35 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:35.307819 | orchestrator | 2025-09-29 06:10:35 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:35.308854 | orchestrator | 2025-09-29 06:10:35 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:35.310827 | orchestrator | 2025-09-29 06:10:35 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:35.311140 | orchestrator | 2025-09-29 06:10:35 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:38.340800 | orchestrator | 2025-09-29 06:10:38 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:38.341971 | orchestrator | 2025-09-29 06:10:38 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:38.343694 | orchestrator | 2025-09-29 06:10:38 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:38.345215 | orchestrator | 2025-09-29 06:10:38 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:38.345284 | orchestrator | 2025-09-29 06:10:38 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:41.408347 | orchestrator | 2025-09-29 06:10:41 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:41.408606 | orchestrator | 2025-09-29 06:10:41 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:41.409410 | orchestrator | 2025-09-29 06:10:41 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:41.410461 | orchestrator | 2025-09-29 06:10:41 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:41.410508 | orchestrator | 2025-09-29 06:10:41 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:44.454628 | orchestrator | 2025-09-29 06:10:44 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:44.456651 | orchestrator | 2025-09-29 06:10:44 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:44.459112 | orchestrator | 2025-09-29 06:10:44 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:44.460883 | orchestrator | 2025-09-29 06:10:44 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:44.461106 | orchestrator | 2025-09-29 06:10:44 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:47.506476 | orchestrator | 2025-09-29 06:10:47 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:47.510827 | orchestrator | 2025-09-29 06:10:47 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED
2025-09-29 06:10:47.513555 | orchestrator | 2025-09-29 06:10:47 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED
2025-09-29 06:10:47.515376 | orchestrator | 2025-09-29 06:10:47 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:10:47.515486 | orchestrator | 2025-09-29 06:10:47 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:10:50.571367 | orchestrator | 2025-09-29 06:10:50 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
2025-09-29 06:10:50.571938 | orchestrator | 2025-09-29 06:10:50 | INFO  | Task
b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:10:50.572961 | orchestrator | 2025-09-29 06:10:50 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:10:50.574951 | orchestrator | 2025-09-29 06:10:50 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:10:50.574975 | orchestrator | 2025-09-29 06:10:50 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:10:53.618271 | orchestrator | 2025-09-29 06:10:53 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:10:53.619775 | orchestrator | 2025-09-29 06:10:53 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:10:53.622200 | orchestrator | 2025-09-29 06:10:53 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:10:53.623299 | orchestrator | 2025-09-29 06:10:53 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:10:53.623330 | orchestrator | 2025-09-29 06:10:53 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:10:56.665576 | orchestrator | 2025-09-29 06:10:56 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:10:56.665931 | orchestrator | 2025-09-29 06:10:56 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:10:56.666815 | orchestrator | 2025-09-29 06:10:56 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:10:56.667738 | orchestrator | 2025-09-29 06:10:56 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:10:56.667846 | orchestrator | 2025-09-29 06:10:56 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:10:59.696575 | orchestrator | 2025-09-29 06:10:59 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:10:59.698829 | orchestrator | 2025-09-29 06:10:59 | INFO  | Task 
b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:10:59.699327 | orchestrator | 2025-09-29 06:10:59 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:10:59.700164 | orchestrator | 2025-09-29 06:10:59 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:10:59.700218 | orchestrator | 2025-09-29 06:10:59 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:11:02.729861 | orchestrator | 2025-09-29 06:11:02 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:11:02.731573 | orchestrator | 2025-09-29 06:11:02 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:11:02.734341 | orchestrator | 2025-09-29 06:11:02 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:11:02.735456 | orchestrator | 2025-09-29 06:11:02 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:11:02.735724 | orchestrator | 2025-09-29 06:11:02 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:11:05.767062 | orchestrator | 2025-09-29 06:11:05 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:11:05.768564 | orchestrator | 2025-09-29 06:11:05 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:11:05.772378 | orchestrator | 2025-09-29 06:11:05 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:11:05.772824 | orchestrator | 2025-09-29 06:11:05 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:11:05.772851 | orchestrator | 2025-09-29 06:11:05 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:11:08.818153 | orchestrator | 2025-09-29 06:11:08 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:11:08.819956 | orchestrator | 2025-09-29 06:11:08 | INFO  | Task 
b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state STARTED 2025-09-29 06:11:08.824292 | orchestrator | 2025-09-29 06:11:08 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:11:08.824333 | orchestrator | 2025-09-29 06:11:08 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:11:08.824346 | orchestrator | 2025-09-29 06:11:08 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:11:11.860099 | orchestrator | 2025-09-29 06:11:11 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:11:11.861035 | orchestrator | 2025-09-29 06:11:11 | INFO  | Task b4c295f1-d32e-450f-bc31-8d76827aedc4 is in state SUCCESS 2025-09-29 06:11:11.861775 | orchestrator | 2025-09-29 06:11:11.861805 | orchestrator | 2025-09-29 06:11:11.861817 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-29 06:11:11.861827 | orchestrator | 2025-09-29 06:11:11.861837 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-29 06:11:11.861847 | orchestrator | Monday 29 September 2025 06:09:40 +0000 (0:00:00.196) 0:00:00.196 ****** 2025-09-29 06:11:11.861857 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-29 06:11:11.861867 | orchestrator | 2025-09-29 06:11:11.861876 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-29 06:11:11.861886 | orchestrator | Monday 29 September 2025 06:09:41 +0000 (0:00:00.911) 0:00:01.108 ****** 2025-09-29 06:11:11.861895 | orchestrator | changed: [testbed-manager] 2025-09-29 06:11:11.861905 | orchestrator | 2025-09-29 06:11:11.861915 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-29 06:11:11.861924 | orchestrator | Monday 29 September 2025 06:09:43 +0000 (0:00:01.291) 0:00:02.400 ****** 2025-09-29 06:11:11.861934 | orchestrator | 
changed: [testbed-manager] 2025-09-29 06:11:11.861943 | orchestrator | 2025-09-29 06:11:11.861953 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:11:11.861962 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:11:11.861974 | orchestrator | 2025-09-29 06:11:11.861983 | orchestrator | 2025-09-29 06:11:11.861993 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:11:11.862002 | orchestrator | Monday 29 September 2025 06:09:43 +0000 (0:00:00.383) 0:00:02.783 ****** 2025-09-29 06:11:11.862012 | orchestrator | =============================================================================== 2025-09-29 06:11:11.862078 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.29s 2025-09-29 06:11:11.862096 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.91s 2025-09-29 06:11:11.862112 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.38s 2025-09-29 06:11:11.862128 | orchestrator | 2025-09-29 06:11:11.862145 | orchestrator | 2025-09-29 06:11:11.862161 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-29 06:11:11.862172 | orchestrator | 2025-09-29 06:11:11.862181 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-29 06:11:11.862264 | orchestrator | Monday 29 September 2025 06:09:39 +0000 (0:00:00.148) 0:00:00.148 ****** 2025-09-29 06:11:11.862273 | orchestrator | ok: [testbed-manager] 2025-09-29 06:11:11.862283 | orchestrator | 2025-09-29 06:11:11.862293 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-29 06:11:11.862302 | orchestrator | Monday 29 September 2025 06:09:39 +0000 (0:00:00.459) 0:00:00.607 
****** 2025-09-29 06:11:11.862312 | orchestrator | ok: [testbed-manager] 2025-09-29 06:11:11.862321 | orchestrator | 2025-09-29 06:11:11.862331 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-29 06:11:11.862341 | orchestrator | Monday 29 September 2025 06:09:40 +0000 (0:00:00.459) 0:00:01.067 ****** 2025-09-29 06:11:11.862352 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-29 06:11:11.862364 | orchestrator | 2025-09-29 06:11:11.862375 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-29 06:11:11.862385 | orchestrator | Monday 29 September 2025 06:09:41 +0000 (0:00:00.731) 0:00:01.798 ****** 2025-09-29 06:11:11.862396 | orchestrator | changed: [testbed-manager] 2025-09-29 06:11:11.862407 | orchestrator | 2025-09-29 06:11:11.862434 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-29 06:11:11.862446 | orchestrator | Monday 29 September 2025 06:09:42 +0000 (0:00:01.297) 0:00:03.095 ****** 2025-09-29 06:11:11.862457 | orchestrator | changed: [testbed-manager] 2025-09-29 06:11:11.862482 | orchestrator | 2025-09-29 06:11:11.862493 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-29 06:11:11.862504 | orchestrator | Monday 29 September 2025 06:09:43 +0000 (0:00:00.911) 0:00:04.007 ****** 2025-09-29 06:11:11.862515 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-29 06:11:11.862526 | orchestrator | 2025-09-29 06:11:11.862537 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-29 06:11:11.862549 | orchestrator | Monday 29 September 2025 06:09:44 +0000 (0:00:01.556) 0:00:05.564 ****** 2025-09-29 06:11:11.862559 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-29 06:11:11.862571 | orchestrator | 2025-09-29 06:11:11.862582 | orchestrator | 
TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-29 06:11:11.862592 | orchestrator | Monday 29 September 2025 06:09:45 +0000 (0:00:00.613) 0:00:06.177 ****** 2025-09-29 06:11:11.862603 | orchestrator | ok: [testbed-manager] 2025-09-29 06:11:11.862614 | orchestrator | 2025-09-29 06:11:11.862625 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-29 06:11:11.862636 | orchestrator | Monday 29 September 2025 06:09:45 +0000 (0:00:00.315) 0:00:06.493 ****** 2025-09-29 06:11:11.862647 | orchestrator | ok: [testbed-manager] 2025-09-29 06:11:11.862658 | orchestrator | 2025-09-29 06:11:11.862669 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:11:11.862680 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:11:11.862692 | orchestrator | 2025-09-29 06:11:11.862728 | orchestrator | 2025-09-29 06:11:11.862738 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:11:11.862748 | orchestrator | Monday 29 September 2025 06:09:45 +0000 (0:00:00.237) 0:00:06.730 ****** 2025-09-29 06:11:11.862757 | orchestrator | =============================================================================== 2025-09-29 06:11:11.862767 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.56s 2025-09-29 06:11:11.862776 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.30s 2025-09-29 06:11:11.862786 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.91s 2025-09-29 06:11:11.862808 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.73s 2025-09-29 06:11:11.862818 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.61s 
2025-09-29 06:11:11.862828 | orchestrator | Get home directory of operator user ------------------------------------- 0.46s 2025-09-29 06:11:11.862838 | orchestrator | Create .kube directory -------------------------------------------------- 0.46s 2025-09-29 06:11:11.862847 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.32s 2025-09-29 06:11:11.862856 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.24s 2025-09-29 06:11:11.862866 | orchestrator | 2025-09-29 06:11:11.862875 | orchestrator | 2025-09-29 06:11:11.862885 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-29 06:11:11.862894 | orchestrator | 2025-09-29 06:11:11.862903 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-29 06:11:11.862913 | orchestrator | Monday 29 September 2025 06:08:49 +0000 (0:00:00.127) 0:00:00.127 ****** 2025-09-29 06:11:11.862923 | orchestrator | ok: [localhost] => { 2025-09-29 06:11:11.862933 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-29 06:11:11.862943 | orchestrator | } 2025-09-29 06:11:11.862952 | orchestrator | 2025-09-29 06:11:11.862962 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-29 06:11:11.862971 | orchestrator | Monday 29 September 2025 06:08:49 +0000 (0:00:00.068) 0:00:00.195 ****** 2025-09-29 06:11:11.862982 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-29 06:11:11.863000 | orchestrator | ...ignoring 2025-09-29 06:11:11.863010 | orchestrator | 2025-09-29 06:11:11.863019 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-29 06:11:11.863029 | orchestrator | Monday 29 September 2025 06:08:52 +0000 (0:00:03.484) 0:00:03.680 ****** 2025-09-29 06:11:11.863038 | orchestrator | skipping: [localhost] 2025-09-29 06:11:11.863048 | orchestrator | 2025-09-29 06:11:11.863057 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-29 06:11:11.863067 | orchestrator | Monday 29 September 2025 06:08:52 +0000 (0:00:00.042) 0:00:03.722 ****** 2025-09-29 06:11:11.863076 | orchestrator | ok: [localhost] 2025-09-29 06:11:11.863085 | orchestrator | 2025-09-29 06:11:11.863095 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:11:11.863104 | orchestrator | 2025-09-29 06:11:11.863114 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:11:11.863123 | orchestrator | Monday 29 September 2025 06:08:53 +0000 (0:00:00.153) 0:00:03.875 ****** 2025-09-29 06:11:11.863133 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:11.863142 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:11.863152 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:11.863161 | orchestrator | 2025-09-29 06:11:11.863171 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:11:11.863180 | orchestrator | Monday 29 September 2025 06:08:53 +0000 (0:00:00.377) 0:00:04.253 ****** 2025-09-29 06:11:11.863190 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-29 06:11:11.863200 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-09-29 06:11:11.863209 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-29 06:11:11.863218 | orchestrator | 2025-09-29 06:11:11.863233 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-29 06:11:11.863242 | orchestrator | 2025-09-29 06:11:11.863252 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-29 06:11:11.863262 | orchestrator | Monday 29 September 2025 06:08:54 +0000 (0:00:00.669) 0:00:04.922 ****** 2025-09-29 06:11:11.863271 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:11:11.863281 | orchestrator | 2025-09-29 06:11:11.863290 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-29 06:11:11.863300 | orchestrator | Monday 29 September 2025 06:08:54 +0000 (0:00:00.725) 0:00:05.648 ****** 2025-09-29 06:11:11.863309 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:11.863319 | orchestrator | 2025-09-29 06:11:11.863328 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-29 06:11:11.863337 | orchestrator | Monday 29 September 2025 06:08:55 +0000 (0:00:00.951) 0:00:06.599 ****** 2025-09-29 06:11:11.863347 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:11.863356 | orchestrator | 2025-09-29 06:11:11.863366 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-29 06:11:11.863375 | orchestrator | Monday 29 September 2025 06:08:56 +0000 (0:00:00.507) 0:00:07.107 ****** 2025-09-29 06:11:11.863385 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:11.863394 | orchestrator | 2025-09-29 06:11:11.863403 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-29 06:11:11.863413 | 
orchestrator | Monday 29 September 2025 06:08:56 +0000 (0:00:00.583) 0:00:07.690 ****** 2025-09-29 06:11:11.863422 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:11.863432 | orchestrator | 2025-09-29 06:11:11.863441 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-29 06:11:11.863450 | orchestrator | Monday 29 September 2025 06:08:57 +0000 (0:00:00.396) 0:00:08.086 ****** 2025-09-29 06:11:11.863460 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:11.863469 | orchestrator | 2025-09-29 06:11:11.863479 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-29 06:11:11.863488 | orchestrator | Monday 29 September 2025 06:08:57 +0000 (0:00:00.636) 0:00:08.723 ****** 2025-09-29 06:11:11.863504 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:11:11.863514 | orchestrator | 2025-09-29 06:11:11.863523 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-29 06:11:11.863538 | orchestrator | Monday 29 September 2025 06:08:59 +0000 (0:00:01.337) 0:00:10.060 ****** 2025-09-29 06:11:11.863548 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:11.863557 | orchestrator | 2025-09-29 06:11:11.863567 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-29 06:11:11.863576 | orchestrator | Monday 29 September 2025 06:09:00 +0000 (0:00:01.310) 0:00:11.371 ****** 2025-09-29 06:11:11.863586 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:11.863595 | orchestrator | 2025-09-29 06:11:11.863605 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-29 06:11:11.863614 | orchestrator | Monday 29 September 2025 06:09:02 +0000 (0:00:01.808) 0:00:13.180 ****** 2025-09-29 06:11:11.863624 | orchestrator | 
skipping: [testbed-node-0] 2025-09-29 06:11:11.863633 | orchestrator | 2025-09-29 06:11:11.863643 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-29 06:11:11.863652 | orchestrator | Monday 29 September 2025 06:09:03 +0000 (0:00:00.684) 0:00:13.865 ****** 2025-09-29 06:11:11.863667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.863683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.863712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.863729 | orchestrator | 2025-09-29 06:11:11.863739 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-29 06:11:11.863749 | orchestrator | Monday 29 September 2025 06:09:04 +0000 (0:00:01.333) 0:00:15.198 ****** 2025-09-29 06:11:11.863767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.863850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.863878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.863889 | orchestrator | 2025-09-29 06:11:11.863899 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-29 06:11:11.863915 | orchestrator | Monday 29 September 2025 06:09:06 +0000 (0:00:01.775) 0:00:16.974 ****** 2025-09-29 06:11:11.863925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-29 06:11:11.863935 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-29 06:11:11.863944 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-29 06:11:11.863954 | orchestrator | 2025-09-29 06:11:11.863963 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-09-29 06:11:11.863972 | orchestrator | Monday 29 September 2025 06:09:07 +0000 (0:00:01.638) 0:00:18.613 ****** 2025-09-29 06:11:11.863982 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-29 06:11:11.863991 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-29 06:11:11.864001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-29 06:11:11.864010 | orchestrator | 2025-09-29 06:11:11.864019 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-29 06:11:11.864037 | orchestrator | Monday 29 September 2025 06:09:10 +0000 (0:00:03.067) 0:00:21.680 ****** 2025-09-29 06:11:11.864046 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-29 06:11:11.864056 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-29 06:11:11.864066 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-29 06:11:11.864075 | orchestrator | 2025-09-29 06:11:11.864085 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-29 06:11:11.864094 | orchestrator | Monday 29 September 2025 06:09:12 +0000 (0:00:01.879) 0:00:23.559 ****** 2025-09-29 06:11:11.864103 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-29 06:11:11.864113 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-29 06:11:11.864123 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-29 06:11:11.864132 | orchestrator | 2025-09-29 06:11:11.864142 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-09-29 06:11:11.864151 | orchestrator | Monday 29 September 2025 06:09:14 +0000 (0:00:02.142) 0:00:25.702 ****** 2025-09-29 06:11:11.864160 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-29 06:11:11.864170 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-29 06:11:11.864179 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-29 06:11:11.864189 | orchestrator | 2025-09-29 06:11:11.864198 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-29 06:11:11.864208 | orchestrator | Monday 29 September 2025 06:09:16 +0000 (0:00:01.630) 0:00:27.333 ****** 2025-09-29 06:11:11.864217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-29 06:11:11.864227 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-29 06:11:11.864236 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-29 06:11:11.864246 | orchestrator | 2025-09-29 06:11:11.864255 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-29 06:11:11.864265 | orchestrator | Monday 29 September 2025 06:09:18 +0000 (0:00:01.632) 0:00:28.965 ****** 2025-09-29 06:11:11.864274 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:11.864284 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:11.864299 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:11.864308 | orchestrator | 2025-09-29 06:11:11.864318 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-29 06:11:11.864327 | orchestrator | Monday 29 September 2025 
06:09:18 +0000 (0:00:00.342) 0:00:29.308 ****** 2025-09-29 06:11:11.864342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.864360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.864371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:11:11.864382 | orchestrator | 2025-09-29 06:11:11.864391 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-29 06:11:11.864401 | orchestrator | Monday 29 September 2025 06:09:20 +0000 (0:00:01.641) 0:00:30.950 ****** 2025-09-29 06:11:11.864410 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:11.864420 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:11.864429 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:11.864439 | orchestrator | 2025-09-29 06:11:11.864448 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-29 
06:11:11.864462 | orchestrator | Monday 29 September 2025 06:09:21 +0000 (0:00:01.057) 0:00:32.007 ****** 2025-09-29 06:11:11.864471 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:11.864481 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:11.864490 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:11.864499 | orchestrator | 2025-09-29 06:11:11.864509 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-09-29 06:11:11.864518 | orchestrator | Monday 29 September 2025 06:09:29 +0000 (0:00:08.585) 0:00:40.593 ****** 2025-09-29 06:11:11.864528 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:11.864537 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:11.864547 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:11.864556 | orchestrator | 2025-09-29 06:11:11.864566 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-29 06:11:11.864575 | orchestrator | 2025-09-29 06:11:11.864585 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-29 06:11:11.864594 | orchestrator | Monday 29 September 2025 06:09:30 +0000 (0:00:00.431) 0:00:41.024 ****** 2025-09-29 06:11:11.864603 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:11.864613 | orchestrator | 2025-09-29 06:11:11.864622 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-29 06:11:11.864636 | orchestrator | Monday 29 September 2025 06:09:30 +0000 (0:00:00.658) 0:00:41.682 ****** 2025-09-29 06:11:11.864645 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:11.864655 | orchestrator | 2025-09-29 06:11:11.864664 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-29 06:11:11.864674 | orchestrator | Monday 29 September 2025 06:09:31 +0000 (0:00:00.613) 0:00:42.296 ****** 2025-09-29 
06:11:11.864683 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:11.864692 | orchestrator | 2025-09-29 06:11:11.864732 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-29 06:11:11.864742 | orchestrator | Monday 29 September 2025 06:09:38 +0000 (0:00:06.946) 0:00:49.243 ****** 2025-09-29 06:11:11.864751 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:11.864761 | orchestrator | 2025-09-29 06:11:11.864771 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-29 06:11:11.864780 | orchestrator | 2025-09-29 06:11:11.864790 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-29 06:11:11.864799 | orchestrator | Monday 29 September 2025 06:10:30 +0000 (0:00:52.535) 0:01:41.779 ****** 2025-09-29 06:11:11.864809 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:11.864818 | orchestrator | 2025-09-29 06:11:11.864828 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-29 06:11:11.864837 | orchestrator | Monday 29 September 2025 06:10:31 +0000 (0:00:00.646) 0:01:42.426 ****** 2025-09-29 06:11:11.864847 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:11.864856 | orchestrator | 2025-09-29 06:11:11.864866 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-29 06:11:11.864875 | orchestrator | Monday 29 September 2025 06:10:31 +0000 (0:00:00.379) 0:01:42.805 ****** 2025-09-29 06:11:11.864885 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:11.864894 | orchestrator | 2025-09-29 06:11:11.864904 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-29 06:11:11.864914 | orchestrator | Monday 29 September 2025 06:10:33 +0000 (0:00:01.868) 0:01:44.673 ****** 2025-09-29 06:11:11.864923 | orchestrator | changed: 
[testbed-node-1] 2025-09-29 06:11:11.864933 | orchestrator | 2025-09-29 06:11:11.864942 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-09-29 06:11:11.864952 | orchestrator | 2025-09-29 06:11:11.864961 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-09-29 06:11:11.864971 | orchestrator | Monday 29 September 2025 06:10:49 +0000 (0:00:15.491) 0:02:00.164 ****** 2025-09-29 06:11:11.864980 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:11.864996 | orchestrator | 2025-09-29 06:11:11.865011 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-09-29 06:11:11.865021 | orchestrator | Monday 29 September 2025 06:10:49 +0000 (0:00:00.674) 0:02:00.839 ****** 2025-09-29 06:11:11.865031 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:11.865040 | orchestrator | 2025-09-29 06:11:11.865050 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-09-29 06:11:11.865059 | orchestrator | Monday 29 September 2025 06:10:50 +0000 (0:00:00.236) 0:02:01.075 ****** 2025-09-29 06:11:11.865069 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:11.865078 | orchestrator | 2025-09-29 06:11:11.865088 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-09-29 06:11:11.865097 | orchestrator | Monday 29 September 2025 06:10:51 +0000 (0:00:01.626) 0:02:02.702 ****** 2025-09-29 06:11:11.865107 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:11.865116 | orchestrator | 2025-09-29 06:11:11.865126 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-09-29 06:11:11.865135 | orchestrator | 2025-09-29 06:11:11.865145 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-09-29 06:11:11.865155 | orchestrator | Monday 29 
September 2025 06:11:07 +0000 (0:00:15.882) 0:02:18.584 ****** 2025-09-29 06:11:11.865164 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:11:11.865174 | orchestrator | 2025-09-29 06:11:11.865183 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-09-29 06:11:11.865193 | orchestrator | Monday 29 September 2025 06:11:08 +0000 (0:00:00.803) 0:02:19.387 ****** 2025-09-29 06:11:11.865202 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-29 06:11:11.865212 | orchestrator | enable_outward_rabbitmq_True 2025-09-29 06:11:11.865221 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-29 06:11:11.865231 | orchestrator | outward_rabbitmq_restart 2025-09-29 06:11:11.865240 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:11.865250 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:11.865260 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:11.865269 | orchestrator | 2025-09-29 06:11:11.865279 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-09-29 06:11:11.865288 | orchestrator | skipping: no hosts matched 2025-09-29 06:11:11.865298 | orchestrator | 2025-09-29 06:11:11.865308 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-09-29 06:11:11.865317 | orchestrator | skipping: no hosts matched 2025-09-29 06:11:11.865327 | orchestrator | 2025-09-29 06:11:11.865336 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-09-29 06:11:11.865346 | orchestrator | skipping: no hosts matched 2025-09-29 06:11:11.865355 | orchestrator | 2025-09-29 06:11:11.865365 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:11:11.865379 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 
skipped=1  rescued=0 ignored=1  2025-09-29 06:11:11.865395 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-29 06:11:11.865412 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:11:11.865433 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:11:11.865449 | orchestrator | 2025-09-29 06:11:11.865464 | orchestrator | 2025-09-29 06:11:11.865480 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:11:11.865496 | orchestrator | Monday 29 September 2025 06:11:11 +0000 (0:00:02.926) 0:02:22.314 ****** 2025-09-29 06:11:11.865512 | orchestrator | =============================================================================== 2025-09-29 06:11:11.865538 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.91s 2025-09-29 06:11:11.865548 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.44s 2025-09-29 06:11:11.865558 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.59s 2025-09-29 06:11:11.865567 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.48s 2025-09-29 06:11:11.865577 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.07s 2025-09-29 06:11:11.865586 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.93s 2025-09-29 06:11:11.865596 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.14s 2025-09-29 06:11:11.865605 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.98s 2025-09-29 06:11:11.865614 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 
1.88s 2025-09-29 06:11:11.865624 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.81s 2025-09-29 06:11:11.865633 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.78s 2025-09-29 06:11:11.865643 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.64s 2025-09-29 06:11:11.865652 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.64s 2025-09-29 06:11:11.865662 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.63s 2025-09-29 06:11:11.865671 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.63s 2025-09-29 06:11:11.865681 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.34s 2025-09-29 06:11:11.865690 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.33s 2025-09-29 06:11:11.865733 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.31s 2025-09-29 06:11:11.865743 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.23s 2025-09-29 06:11:11.865752 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.06s 2025-09-29 06:11:11.865762 | orchestrator | 2025-09-29 06:11:11 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:11:11.865875 | orchestrator | 2025-09-29 06:11:11 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:11:11.865888 | orchestrator | 2025-09-29 06:11:11 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:11:14.912381 | orchestrator | 2025-09-29 06:11:14 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:11:14.913547 | orchestrator | 2025-09-29 06:11:14 | INFO  | Task 
84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:11:14.914588 | orchestrator |
2025-09-29 06:11:39 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:11:39.289533 | orchestrator | 2025-09-29 06:11:39 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state STARTED 2025-09-29 06:11:39.290716 | orchestrator | 2025-09-29 06:11:39 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:11:39.290778 | orchestrator | 2025-09-29 06:11:39 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:11:42.328658 | orchestrator | 2025-09-29 06:11:42 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED 2025-09-29 06:11:42.332024 | orchestrator | 2025-09-29 06:11:42.332112 | orchestrator | 2025-09-29 06:11:42.332127 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:11:42.332138 | orchestrator | 2025-09-29 06:11:42.332156 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:11:42.332244 | orchestrator | Monday 29 September 2025 06:09:37 +0000 (0:00:00.197) 0:00:00.197 ****** 2025-09-29 06:11:42.332268 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:11:42.332290 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:11:42.332310 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:11:42.332331 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.332349 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.332368 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.332379 | orchestrator | 2025-09-29 06:11:42.332390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:11:42.332401 | orchestrator | Monday 29 September 2025 06:09:39 +0000 (0:00:01.128) 0:00:01.325 ****** 2025-09-29 06:11:42.332411 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-29 06:11:42.332424 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-29 06:11:42.332435 | 
orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-29 06:11:42.332446 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-29 06:11:42.332456 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-29 06:11:42.332467 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-29 06:11:42.332477 | orchestrator | 2025-09-29 06:11:42.332488 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-29 06:11:42.332498 | orchestrator | 2025-09-29 06:11:42.332509 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-29 06:11:42.332519 | orchestrator | Monday 29 September 2025 06:09:40 +0000 (0:00:01.086) 0:00:02.411 ****** 2025-09-29 06:11:42.332531 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:11:42.332543 | orchestrator | 2025-09-29 06:11:42.332554 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-29 06:11:42.332565 | orchestrator | Monday 29 September 2025 06:09:42 +0000 (0:00:01.966) 0:00:04.378 ****** 2025-09-29 06:11:42.332596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332626 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332747 | orchestrator | 2025-09-29 06:11:42.332779 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-29 06:11:42.332792 | orchestrator | Monday 29 September 2025 06:09:44 +0000 (0:00:02.361) 0:00:06.739 ****** 2025-09-29 06:11:42.332805 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332817 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.332917 | orchestrator | 2025-09-29 06:11:42.332936 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-29 06:11:42.332954 | orchestrator | Monday 29 September 2025 06:09:45 +0000 (0:00:01.566) 0:00:08.306 ****** 2025-09-29 06:11:42.332973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333083 | orchestrator | 2025-09-29 06:11:42.333093 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-29 06:11:42.333104 | orchestrator | Monday 29 September 2025 06:09:47 +0000 (0:00:01.189) 0:00:09.495 ****** 2025-09-29 06:11:42.333120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333142 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 
'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333193 | orchestrator | 2025-09-29 06:11:42.333209 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-09-29 06:11:42.333221 | orchestrator | Monday 29 September 2025 06:09:48 +0000 (0:00:01.416) 0:00:10.912 ****** 2025-09-29 06:11:42.333232 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-09-29 06:11:42.333242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333253 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.333309 | orchestrator | 2025-09-29 06:11:42.333328 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-29 06:11:42.333346 | orchestrator | Monday 29 September 2025 06:09:49 +0000 (0:00:01.277) 0:00:12.189 ****** 2025-09-29 06:11:42.333363 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:11:42.333381 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:11:42.333397 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:11:42.333414 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:42.333431 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:42.333449 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:42.333468 | orchestrator | 2025-09-29 06:11:42.333487 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-29 06:11:42.333504 | orchestrator | Monday 29 September 2025 06:09:52 +0000 (0:00:02.409) 0:00:14.598 ****** 2025-09-29 06:11:42.333523 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-29 06:11:42.333542 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-29 06:11:42.333559 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-29 06:11:42.333577 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-29 06:11:42.333595 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-29 06:11:42.333614 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-29 06:11:42.333633 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-29 06:11:42.333652 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-29 06:11:42.333673 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-29 06:11:42.333720 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-29 06:11:42.333733 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-29 06:11:42.333744 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-29 06:11:42.333755 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-29 06:11:42.333767 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-29 06:11:42.333778 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-29 06:11:42.333788 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-29 06:11:42.333799 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-29 06:11:42.333810 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-29 06:11:42.333821 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-29 06:11:42.333833 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-29 06:11:42.333852 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-29 06:11:42.333863 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-29 06:11:42.333873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-29 06:11:42.333890 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-29 06:11:42.333901 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-29 06:11:42.333912 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-29 06:11:42.333922 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-29 06:11:42.333932 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-29 06:11:42.333943 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-29 06:11:42.333953 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-29 06:11:42.333964 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-29 06:11:42.333974 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-29 06:11:42.333985 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-29 06:11:42.333995 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-29 06:11:42.334006 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-29 06:11:42.334063 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-29 06:11:42.334078 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-29 06:11:42.334088 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-29 06:11:42.334099 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-29 06:11:42.334110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-29 06:11:42.334121 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-29 06:11:42.334131 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-29 06:11:42.334142 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-29 06:11:42.334153 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-29 06:11:42.334171 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-29 06:11:42.334182 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-29 06:11:42.334193 | orchestrator | changed: 
[testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-29 06:11:42.334204 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-29 06:11:42.334215 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-29 06:11:42.334233 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-29 06:11:42.334244 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-29 06:11:42.334254 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-29 06:11:42.334265 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-29 06:11:42.334276 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-29 06:11:42.334286 | orchestrator | 2025-09-29 06:11:42.334297 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-29 06:11:42.334308 | orchestrator | Monday 29 September 2025 06:10:12 +0000 (0:00:20.559) 0:00:35.157 ****** 2025-09-29 06:11:42.334319 | orchestrator | 2025-09-29 06:11:42.334330 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-29 06:11:42.334340 | orchestrator | Monday 29 September 2025 06:10:13 +0000 (0:00:00.170) 0:00:35.328 ****** 2025-09-29 06:11:42.334351 | orchestrator | 2025-09-29 06:11:42.334361 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 
2025-09-29 06:11:42.334372 | orchestrator | Monday 29 September 2025 06:10:13 +0000 (0:00:00.058) 0:00:35.387 ****** 2025-09-29 06:11:42.334383 | orchestrator | 2025-09-29 06:11:42.334399 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-29 06:11:42.334409 | orchestrator | Monday 29 September 2025 06:10:13 +0000 (0:00:00.069) 0:00:35.457 ****** 2025-09-29 06:11:42.334420 | orchestrator | 2025-09-29 06:11:42.334431 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-29 06:11:42.334442 | orchestrator | Monday 29 September 2025 06:10:13 +0000 (0:00:00.064) 0:00:35.521 ****** 2025-09-29 06:11:42.334452 | orchestrator | 2025-09-29 06:11:42.334463 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-29 06:11:42.334473 | orchestrator | Monday 29 September 2025 06:10:13 +0000 (0:00:00.061) 0:00:35.583 ****** 2025-09-29 06:11:42.334484 | orchestrator | 2025-09-29 06:11:42.334495 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-29 06:11:42.334505 | orchestrator | Monday 29 September 2025 06:10:13 +0000 (0:00:00.062) 0:00:35.645 ****** 2025-09-29 06:11:42.334516 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:11:42.334527 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:11:42.334538 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:11:42.334548 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.334559 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.334570 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.334580 | orchestrator | 2025-09-29 06:11:42.334591 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-29 06:11:42.334602 | orchestrator | Monday 29 September 2025 06:10:14 +0000 (0:00:01.450) 0:00:37.096 ****** 2025-09-29 06:11:42.334613 | orchestrator | changed: 
[testbed-node-3] 2025-09-29 06:11:42.334631 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:11:42.334649 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:11:42.334667 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:42.334709 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:42.334728 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:42.334746 | orchestrator | 2025-09-29 06:11:42.334763 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-29 06:11:42.334781 | orchestrator | 2025-09-29 06:11:42.334800 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-29 06:11:42.334816 | orchestrator | Monday 29 September 2025 06:10:23 +0000 (0:00:08.399) 0:00:45.496 ****** 2025-09-29 06:11:42.334827 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:11:42.334847 | orchestrator | 2025-09-29 06:11:42.334857 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-29 06:11:42.334868 | orchestrator | Monday 29 September 2025 06:10:23 +0000 (0:00:00.723) 0:00:46.220 ****** 2025-09-29 06:11:42.334883 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:11:42.334901 | orchestrator | 2025-09-29 06:11:42.334920 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-29 06:11:42.334940 | orchestrator | Monday 29 September 2025 06:10:24 +0000 (0:00:00.548) 0:00:46.768 ****** 2025-09-29 06:11:42.334959 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.334978 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.334990 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.335001 | orchestrator | 2025-09-29 06:11:42.335011 | orchestrator | TASK [ovn-db : Divide 
hosts by their OVN NB volume availability] *************** 2025-09-29 06:11:42.335022 | orchestrator | Monday 29 September 2025 06:10:25 +0000 (0:00:01.016) 0:00:47.784 ****** 2025-09-29 06:11:42.335038 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.335057 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.335074 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.335099 | orchestrator | 2025-09-29 06:11:42.335117 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-29 06:11:42.335135 | orchestrator | Monday 29 September 2025 06:10:25 +0000 (0:00:00.302) 0:00:48.087 ****** 2025-09-29 06:11:42.335154 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.335174 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.335191 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.335210 | orchestrator | 2025-09-29 06:11:42.335221 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-29 06:11:42.335232 | orchestrator | Monday 29 September 2025 06:10:26 +0000 (0:00:00.383) 0:00:48.471 ****** 2025-09-29 06:11:42.335243 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.335253 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.335263 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.335274 | orchestrator | 2025-09-29 06:11:42.335285 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-29 06:11:42.335295 | orchestrator | Monday 29 September 2025 06:10:26 +0000 (0:00:00.319) 0:00:48.790 ****** 2025-09-29 06:11:42.335306 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.335317 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.335327 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.335338 | orchestrator | 2025-09-29 06:11:42.335348 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-29 
06:11:42.335359 | orchestrator | Monday 29 September 2025 06:10:27 +0000 (0:00:00.536) 0:00:49.326 ****** 2025-09-29 06:11:42.335370 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.335380 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.335391 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.335402 | orchestrator | 2025-09-29 06:11:42.335412 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-29 06:11:42.335423 | orchestrator | Monday 29 September 2025 06:10:27 +0000 (0:00:00.295) 0:00:49.621 ****** 2025-09-29 06:11:42.335433 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.335444 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.335454 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.335465 | orchestrator | 2025-09-29 06:11:42.335476 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-29 06:11:42.335486 | orchestrator | Monday 29 September 2025 06:10:27 +0000 (0:00:00.298) 0:00:49.920 ****** 2025-09-29 06:11:42.335497 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.335507 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.335518 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.335529 | orchestrator | 2025-09-29 06:11:42.335539 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-29 06:11:42.335566 | orchestrator | Monday 29 September 2025 06:10:27 +0000 (0:00:00.300) 0:00:50.220 ****** 2025-09-29 06:11:42.335582 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.335601 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.335618 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.335636 | orchestrator | 2025-09-29 06:11:42.335654 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-29 
06:11:42.335674 | orchestrator | Monday 29 September 2025 06:10:28 +0000 (0:00:00.521) 0:00:50.742 ****** 2025-09-29 06:11:42.335716 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.335737 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.335755 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.335773 | orchestrator | 2025-09-29 06:11:42.335791 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-29 06:11:42.335810 | orchestrator | Monday 29 September 2025 06:10:28 +0000 (0:00:00.308) 0:00:51.051 ****** 2025-09-29 06:11:42.335829 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.335849 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.335867 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.335886 | orchestrator | 2025-09-29 06:11:42.335904 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-29 06:11:42.335923 | orchestrator | Monday 29 September 2025 06:10:29 +0000 (0:00:00.311) 0:00:51.362 ****** 2025-09-29 06:11:42.335942 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.335960 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.335979 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.335996 | orchestrator | 2025-09-29 06:11:42.336015 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-29 06:11:42.336033 | orchestrator | Monday 29 September 2025 06:10:29 +0000 (0:00:00.276) 0:00:51.639 ****** 2025-09-29 06:11:42.336051 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.336070 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.336089 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.336107 | orchestrator | 2025-09-29 06:11:42.336126 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-29 
06:11:42.336143 | orchestrator | Monday 29 September 2025 06:10:29 +0000 (0:00:00.271) 0:00:51.910 ****** 2025-09-29 06:11:42.336162 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.336181 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.336197 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.336216 | orchestrator | 2025-09-29 06:11:42.336235 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-29 06:11:42.336252 | orchestrator | Monday 29 September 2025 06:10:29 +0000 (0:00:00.381) 0:00:52.292 ****** 2025-09-29 06:11:42.336268 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.336284 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.336300 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.336317 | orchestrator | 2025-09-29 06:11:42.336334 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-29 06:11:42.336350 | orchestrator | Monday 29 September 2025 06:10:30 +0000 (0:00:00.246) 0:00:52.538 ****** 2025-09-29 06:11:42.336367 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.336384 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.336401 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.336418 | orchestrator | 2025-09-29 06:11:42.336437 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-29 06:11:42.336455 | orchestrator | Monday 29 September 2025 06:10:30 +0000 (0:00:00.279) 0:00:52.817 ****** 2025-09-29 06:11:42.336473 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.336489 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.336521 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.336539 | orchestrator | 2025-09-29 06:11:42.336557 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-29 
06:11:42.336588 | orchestrator | Monday 29 September 2025 06:10:30 +0000 (0:00:00.251) 0:00:53.069 ****** 2025-09-29 06:11:42.336608 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:11:42.336628 | orchestrator | 2025-09-29 06:11:42.336646 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-29 06:11:42.336666 | orchestrator | Monday 29 September 2025 06:10:31 +0000 (0:00:00.695) 0:00:53.764 ****** 2025-09-29 06:11:42.336719 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.336739 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.336758 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.336775 | orchestrator | 2025-09-29 06:11:42.336794 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-29 06:11:42.336807 | orchestrator | Monday 29 September 2025 06:10:31 +0000 (0:00:00.466) 0:00:54.231 ****** 2025-09-29 06:11:42.336817 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.336828 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.336838 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.336849 | orchestrator | 2025-09-29 06:11:42.336859 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-29 06:11:42.336870 | orchestrator | Monday 29 September 2025 06:10:32 +0000 (0:00:00.664) 0:00:54.896 ****** 2025-09-29 06:11:42.336881 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.336891 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.336901 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.336912 | orchestrator | 2025-09-29 06:11:42.336922 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-29 06:11:42.336933 | orchestrator | Monday 29 September 2025 06:10:33 +0000 (0:00:00.475) 
0:00:55.372 ****** 2025-09-29 06:11:42.336944 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.336954 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.336965 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.336975 | orchestrator | 2025-09-29 06:11:42.336986 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-29 06:11:42.336997 | orchestrator | Monday 29 September 2025 06:10:33 +0000 (0:00:00.416) 0:00:55.788 ****** 2025-09-29 06:11:42.337007 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.337017 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.337028 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.337038 | orchestrator | 2025-09-29 06:11:42.337057 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-29 06:11:42.337067 | orchestrator | Monday 29 September 2025 06:10:33 +0000 (0:00:00.322) 0:00:56.110 ****** 2025-09-29 06:11:42.337078 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.337089 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.337099 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.337109 | orchestrator | 2025-09-29 06:11:42.337120 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-09-29 06:11:42.337130 | orchestrator | Monday 29 September 2025 06:10:34 +0000 (0:00:00.304) 0:00:56.415 ****** 2025-09-29 06:11:42.337141 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.337151 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.337162 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.337172 | orchestrator | 2025-09-29 06:11:42.337183 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-09-29 06:11:42.337193 | orchestrator | Monday 29 September 2025 06:10:34 +0000 
(0:00:00.417) 0:00:56.832 ****** 2025-09-29 06:11:42.337204 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.337214 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.337224 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.337235 | orchestrator | 2025-09-29 06:11:42.337246 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-29 06:11:42.337256 | orchestrator | Monday 29 September 2025 06:10:34 +0000 (0:00:00.286) 0:00:57.119 ****** 2025-09-29 06:11:42.337276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42 | INFO  | Task 84b1325c-3424-4842-a2b1-be39c06d9bc1 is in state SUCCESS 2025-09-29 06:11:42.337358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337426 | orchestrator | 2025-09-29 06:11:42.337437 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-29 06:11:42.337448 | orchestrator | Monday 29 September 2025 06:10:36 +0000 (0:00:01.378) 0:00:58.498 ****** 2025-09-29 06:11:42.337466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-29 06:11:42.337500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337574 | orchestrator | 2025-09-29 06:11:42.337585 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-29 06:11:42.337596 | orchestrator | Monday 29 September 2025 06:10:40 +0000 (0:00:04.362) 0:01:02.861 ****** 2025-09-29 06:11:42.337645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337836 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.337871 | orchestrator | 2025-09-29 06:11:42.337888 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-29 06:11:42.337904 | orchestrator | Monday 29 September 2025 06:10:42 +0000 (0:00:02.191) 0:01:05.053 ****** 2025-09-29 06:11:42.337935 | orchestrator | 2025-09-29 06:11:42.337953 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-29 06:11:42.337970 | orchestrator | Monday 29 September 2025 06:10:43 +0000 (0:00:00.352) 0:01:05.406 ****** 2025-09-29 06:11:42.337989 | orchestrator | 2025-09-29 06:11:42.338076 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-29 06:11:42.338094 | orchestrator | Monday 29 September 2025 06:10:43 +0000 (0:00:00.067) 0:01:05.473 ****** 2025-09-29 06:11:42.338105 | orchestrator | 2025-09-29 06:11:42.338115 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-29 06:11:42.338126 | orchestrator | Monday 29 September 2025 06:10:43 +0000 (0:00:00.068) 0:01:05.542 ****** 
2025-09-29 06:11:42.338136 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:42.338145 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:42.338154 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:42.338164 | orchestrator | 2025-09-29 06:11:42.338173 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-29 06:11:42.338182 | orchestrator | Monday 29 September 2025 06:10:45 +0000 (0:00:02.374) 0:01:07.916 ****** 2025-09-29 06:11:42.338192 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:42.338201 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:42.338211 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:42.338220 | orchestrator | 2025-09-29 06:11:42.338229 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-29 06:11:42.338239 | orchestrator | Monday 29 September 2025 06:10:52 +0000 (0:00:07.266) 0:01:15.182 ****** 2025-09-29 06:11:42.338248 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:11:42.338257 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:11:42.338267 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:42.338276 | orchestrator | 2025-09-29 06:11:42.338285 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-29 06:11:42.338295 | orchestrator | Monday 29 September 2025 06:10:59 +0000 (0:00:06.621) 0:01:21.804 ****** 2025-09-29 06:11:42.338304 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:11:42.338313 | orchestrator | 2025-09-29 06:11:42.338323 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-29 06:11:42.338332 | orchestrator | Monday 29 September 2025 06:10:59 +0000 (0:00:00.260) 0:01:22.064 ****** 2025-09-29 06:11:42.338342 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.338351 | orchestrator | ok: [testbed-node-1] 2025-09-29 
06:11:42.338361 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.338370 | orchestrator | 2025-09-29 06:11:42.338379 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-29 06:11:42.338389 | orchestrator | Monday 29 September 2025 06:11:00 +0000 (0:00:00.840) 0:01:22.905 ****** 2025-09-29 06:11:42.338398 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.338408 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.338417 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:42.338426 | orchestrator | 2025-09-29 06:11:42.338435 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-29 06:11:42.338445 | orchestrator | Monday 29 September 2025 06:11:01 +0000 (0:00:00.655) 0:01:23.561 ****** 2025-09-29 06:11:42.338454 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.338463 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.338473 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.338482 | orchestrator | 2025-09-29 06:11:42.338491 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-29 06:11:42.338501 | orchestrator | Monday 29 September 2025 06:11:01 +0000 (0:00:00.701) 0:01:24.262 ****** 2025-09-29 06:11:42.338510 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:11:42.338519 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:11:42.338529 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:11:42.338538 | orchestrator | 2025-09-29 06:11:42.338547 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-29 06:11:42.338565 | orchestrator | Monday 29 September 2025 06:11:02 +0000 (0:00:00.655) 0:01:24.918 ****** 2025-09-29 06:11:42.338584 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.338593 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.338603 | orchestrator | ok: 
[testbed-node-2] 2025-09-29 06:11:42.338612 | orchestrator | 2025-09-29 06:11:42.338622 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-29 06:11:42.338632 | orchestrator | Monday 29 September 2025 06:11:03 +0000 (0:00:00.976) 0:01:25.894 ****** 2025-09-29 06:11:42.338641 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.338651 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.338660 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.338670 | orchestrator | 2025-09-29 06:11:42.338680 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-29 06:11:42.338710 | orchestrator | Monday 29 September 2025 06:11:04 +0000 (0:00:00.721) 0:01:26.615 ****** 2025-09-29 06:11:42.338719 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:11:42.338729 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:11:42.338738 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:11:42.338748 | orchestrator | 2025-09-29 06:11:42.338757 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-29 06:11:42.338767 | orchestrator | Monday 29 September 2025 06:11:04 +0000 (0:00:00.289) 0:01:26.904 ****** 2025-09-29 06:11:42.338778 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338788 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338803 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338814 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338825 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338835 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338845 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338861 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338878 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338888 | orchestrator | 2025-09-29 06:11:42.338898 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-29 06:11:42.338908 | orchestrator | Monday 29 September 2025 06:11:05 +0000 (0:00:01.397) 0:01:28.302 ****** 2025-09-29 06:11:42.338918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338928 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338938 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338973 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.338999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339019 | orchestrator | 2025-09-29 06:11:42.339029 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-29 06:11:42.339038 | orchestrator | Monday 29 September 2025 06:11:12 +0000 (0:00:06.158) 0:01:34.460 ****** 2025-09-29 06:11:42.339055 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339065 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339075 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339100 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339120 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339145 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:11:42.339155 | orchestrator | 2025-09-29 06:11:42.339165 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-29 06:11:42.339175 | orchestrator | Monday 29 September 2025 06:11:15 +0000 (0:00:02.871) 0:01:37.332 ****** 2025-09-29 06:11:42.339184 | orchestrator | 2025-09-29 06:11:42.339194 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-29 06:11:42.339204 | orchestrator | Monday 29 September 2025 06:11:15 +0000 (0:00:00.060) 0:01:37.392 ****** 2025-09-29 06:11:42.339213 | orchestrator | 2025-09-29 06:11:42.339223 | orchestrator | TASK [ovn-db : Flush handlers] 
orchestrator | *************************************************
orchestrator | Monday 29 September 2025 06:11:15 +0000 (0:00:00.062) 0:01:37.455 ******
orchestrator |
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
orchestrator | Monday 29 September 2025 06:11:15 +0000 (0:00:00.067) 0:01:37.523 ******
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
orchestrator | Monday 29 September 2025 06:11:21 +0000 (0:00:06.124) 0:01:43.647 ******
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
orchestrator | Monday 29 September 2025 06:11:27 +0000 (0:00:06.120) 0:01:49.768 ******
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
orchestrator | Monday 29 September 2025 06:11:34 +0000 (0:00:06.950) 0:01:56.718 ******
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
orchestrator | Monday 29 September 2025 06:11:34 +0000 (0:00:00.137) 0:01:56.855 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
orchestrator | Monday 29 September 2025 06:11:35 +0000 (0:00:00.778) 0:01:57.634 ******
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
orchestrator | Monday 29 September 2025 06:11:36 +0000 (0:00:00.720) 0:01:58.354 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
orchestrator | Monday 29 September 2025 06:11:36 +0000 (0:00:00.817) 0:01:59.172 ******
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | changed: [testbed-node-0]
orchestrator |
orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
orchestrator | Monday 29 September 2025 06:11:37 +0000 (0:00:00.699) 0:01:59.871 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
orchestrator | Monday 29 September 2025 06:11:38 +0000 (0:00:00.748) 0:02:00.620 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | PLAY RECAP *********************************************************************
orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
orchestrator | testbed-node-3 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
orchestrator | testbed-node-4 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
orchestrator | testbed-node-5 : ok=12  changed=8   unreachable=0  failed=0  skipped=0   rescued=0  ignored=0
orchestrator |
orchestrator |
orchestrator | TASKS RECAP ********************************************************************
orchestrator | Monday 29 September 2025 06:11:39 +0000 (0:00:01.067) 0:02:01.687 ******
orchestrator | ===============================================================================
orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.56s
orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.57s
orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.39s
orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.50s
orchestrator | ovn-controller : Restart ovn-controller container ----------------------- 8.40s
orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.16s
orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.36s
orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.87s
orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.41s
orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.36s
orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.19s
orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.97s
orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.57s
orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.45s
orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.42s
orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s
orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.38s
orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.28s
orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.19s
orchestrator | Group hosts based on Kolla action --------------------------------------- 1.13s
orchestrator | 2025-09-29 06:11:42 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
orchestrator | 2025-09-29 06:11:42 | INFO  | Wait 1 second(s) until the next check
orchestrator | [... repetitive polling trimmed: tasks d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 and 174635ef-6660-4ad3-8978-e7338445f93f were checked every ~3 s from 06:11:45 to 06:14:23 and remained in state STARTED throughout ...]
orchestrator | 2025-09-29 06:14:26 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state STARTED
orchestrator | 2025-09-29 06:14:26 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
orchestrator | 2025-09-29 06:14:26 | INFO  | Wait 1 second(s) until the next check
orchestrator |
orchestrator |
orchestrator | PLAY [Group hosts based on configuration] **************************************
orchestrator |
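Aside: the long run of trimmed status checks above follows a plain poll-until-done pattern: query each task's state, and sleep before the next round while any task is still STARTED. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable in place of the real OSISM task API (the function name, parameters, and non-STARTED state strings are illustrative, not the actual client interface):

```python
import time


def wait_for_tasks(get_state, task_ids, poll_interval=1.0, timeout=600.0):
    """Poll task states until every task leaves STARTED or the timeout expires.

    get_state: hypothetical callable mapping a task ID to a state string
    (e.g. "STARTED", "SUCCESS"); stands in for the real task-status client.
    Returns True if all tasks finished, False on timeout.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending and time.monotonic() < deadline:
        # sorted() copies the set, so discarding inside the loop is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print("INFO  | Wait 1 second(s) until the next check")
            time.sleep(poll_interval)
    return not pending
```

With a one-second sleep between rounds plus the cost of the status queries themselves, consecutive checks land a few seconds apart, which is consistent with the ~3 s spacing of the INFO lines in the log.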
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
orchestrator | Monday 29 September 2025 06:08:29 +0000 (0:00:00.256) 0:00:00.256 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.288) 0:00:00.544 ******
orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
orchestrator |
orchestrator | PLAY [Apply role loadbalancer] *************************************************
orchestrator |
orchestrator | TASK [loadbalancer : include_tasks] ********************************************
orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:00.443) 0:00:00.987 ******
orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
orchestrator | Monday 29 September 2025 06:08:31 +0000 (0:00:00.885) 0:00:01.872 ******
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator |
orchestrator | TASK [Setting sysctl values] ***************************************************
orchestrator | Monday 29 September 2025 06:08:32 +0000 (0:00:00.817) 0:00:02.690 ******
orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
orchestrator | Monday 29 September 2025 06:08:33 +0000 (0:00:00.730) 0:00:03.421 ******
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
orchestrator | Monday 29 September 2025 06:08:33 +0000 (0:00:00.804) 0:00:04.225 ******
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
orchestrator |
orchestrator | TASK [module-load : Load modules] **********************************************
orchestrator | Monday 29 September 2025 06:08:36 +0000 (0:00:02.649) 0:00:06.874 ******
orchestrator | changed: [testbed-node-1] => (item=ip_vs)
orchestrator | changed: [testbed-node-0] => (item=ip_vs)
orchestrator | changed: [testbed-node-2] => (item=ip_vs)
orchestrator |
orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
orchestrator | Monday 29 September 2025 06:08:37 +0000 (0:00:00.977) 0:00:07.852 ******
orchestrator | changed: [testbed-node-2] => (item=ip_vs)
orchestrator | changed: [testbed-node-1] => (item=ip_vs)
orchestrator | changed: [testbed-node-0] => (item=ip_vs)
orchestrator |
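Aside: in the "sysctl : Setting sysctl values" task earlier in this play, 'KOLLA_UNSET' acts as a sentinel meaning "leave the kernel default untouched", which is why net.ipv4.tcp_retries2 reports ok while the other items report changed. A hedged sketch of how such an item list maps to a sysctl.conf-style key=value rendering (render_sysctl_conf is an illustrative helper, not the actual role code):

```python
def render_sysctl_conf(items):
    """Render sysctl items as key=value lines, skipping sentinel values.

    'KOLLA_UNSET' (the sentinel seen in the log) means the key is left at
    its kernel default, so no line is emitted for it.
    """
    lines = []
    for item in items:
        if item["value"] == "KOLLA_UNSET":  # leave the kernel default untouched
            continue
        lines.append(f"{item['name']}={item['value']}")
    return "\n".join(lines) + "\n"


# The four items from the log output above:
items = [
    {"name": "net.ipv6.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.ip_nonlocal_bind", "value": 1},
    {"name": "net.ipv4.tcp_retries2", "value": "KOLLA_UNSET"},
    {"name": "net.unix.max_dgram_qlen", "value": 128},
]
print(render_sysctl_conf(items), end="")
```

Only three of the four items produce a setting; tcp_retries2 is skipped, matching the ok/changed split in the task results.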
orchestrator | TASK [module-load : Drop module persistence] ***********************************
orchestrator | Monday 29 September 2025 06:08:38 +0000 (0:00:01.383) 0:00:09.235 ******
orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
orchestrator | Monday 29 September 2025 06:08:40 +0000 (0:00:01.359) 0:00:10.595 ******
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-29 06:14:29.727525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-29 06:14:29.727543 | orchestrator | 2025-09-29 06:14:29.727561 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-29 06:14:29.727578 | orchestrator | Monday 29 September 2025 06:08:42 +0000 (0:00:02.478) 0:00:13.073 ****** 2025-09-29 06:14:29.727625 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.727643 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.728127 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.728184 | orchestrator | 2025-09-29 06:14:29.728195 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-29 06:14:29.728205 | orchestrator | Monday 29 September 2025 06:08:43 +0000 (0:00:00.996) 0:00:14.070 ****** 2025-09-29 06:14:29.728214 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-29 06:14:29.728224 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-29 06:14:29.728234 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-29 06:14:29.728243 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-29 06:14:29.728252 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-29 06:14:29.728262 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-29 06:14:29.728271 | orchestrator | 2025-09-29 
06:14:29.728280 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-29 06:14:29.728290 | orchestrator | Monday 29 September 2025 06:08:46 +0000 (0:00:02.468) 0:00:16.538 ****** 2025-09-29 06:14:29.728300 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.728309 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.728318 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.728327 | orchestrator | 2025-09-29 06:14:29.728337 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-29 06:14:29.728353 | orchestrator | Monday 29 September 2025 06:08:47 +0000 (0:00:01.378) 0:00:17.916 ****** 2025-09-29 06:14:29.728363 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:14:29.728372 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:14:29.728382 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:14:29.728391 | orchestrator | 2025-09-29 06:14:29.728401 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-29 06:14:29.728410 | orchestrator | Monday 29 September 2025 06:08:49 +0000 (0:00:01.840) 0:00:19.756 ****** 2025-09-29 06:14:29.728421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.728442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.728453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.728464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-29 06:14:29.728482 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.728492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.728503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.728516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.728527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-29 06:14:29.728537 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.728556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.728570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.728586 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.728596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-29 06:14:29.728628 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.728639 | orchestrator | 2025-09-29 06:14:29.728649 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-29 06:14:29.728659 | orchestrator | Monday 29 September 2025 06:08:50 +0000 (0:00:01.296) 0:00:21.053 ****** 2025-09-29 06:14:29.728673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.728683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.728701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.728717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.728727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.728737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-29 06:14:29.728752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.728769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.728817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-29 06:14:29.728912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.728944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.728956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150', '__omit_place_holder__c6731da46544f2864c6a0ba57211db74ed48a150'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-29 06:14:29.728965 | orchestrator | 2025-09-29 06:14:29.728980 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-29 06:14:29.728996 | orchestrator | Monday 29 September 2025 06:08:53 +0000 (0:00:03.063) 0:00:24.116 ****** 2025-09-29 06:14:29.729028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-29 06:14:29.729137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-29 06:14:29.729151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-29 06:14:29.729161 | orchestrator | 2025-09-29 06:14:29.729170 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-29 06:14:29.729180 | orchestrator | Monday 29 September 2025 06:08:57 +0000 (0:00:03.600) 0:00:27.717 ****** 2025-09-29 06:14:29.729189 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-29 06:14:29.729199 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-29 06:14:29.729208 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-29 06:14:29.729224 | orchestrator | 2025-09-29 06:14:29.729233 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-29 06:14:29.729243 | orchestrator | Monday 29 September 2025 06:08:59 +0000 (0:00:02.458) 0:00:30.176 ****** 2025-09-29 06:14:29.729252 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-29 06:14:29.729261 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-29 06:14:29.729271 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-09-29 06:14:29.729280 | orchestrator | 2025-09-29 06:14:29.729307 | orchestrator | 2025-09-29 06:14:29 | INFO  | Task d9154634-b2c9-4cbe-8e4c-7b0c13bd3852 is in state SUCCESS 2025-09-29 06:14:29.729325 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-09-29 06:14:29.729342 | orchestrator | Monday 29 September 2025 06:09:05 +0000 (0:00:06.062) 0:00:36.238 ****** 2025-09-29 06:14:29.729386 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.729398 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.729407 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.729416 | orchestrator | 2025-09-29 06:14:29.729426 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-09-29 06:14:29.729435 | orchestrator | Monday 29 September 2025 06:09:06 +0000 (0:00:00.935) 0:00:37.173 ****** 2025-09-29 06:14:29.729445 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-29 06:14:29.729454 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-29 06:14:29.729464 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-09-29 06:14:29.729473 | orchestrator | 2025-09-29 06:14:29.729482 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-09-29 06:14:29.729491 | orchestrator | Monday 29 September 2025 06:09:09 +0000 (0:00:03.122) 0:00:40.296 ****** 2025-09-29 06:14:29.729501 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-29 06:14:29.729510 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-29 06:14:29.729519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-09-29 06:14:29.729529 | orchestrator | 2025-09-29 06:14:29.729539 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-09-29 06:14:29.729548 | orchestrator | Monday 29 September 2025 06:09:13 +0000 (0:00:03.142) 0:00:43.438 ****** 2025-09-29 06:14:29.729557 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-09-29 06:14:29.729566 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-09-29 06:14:29.729576 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-09-29 06:14:29.729585 | orchestrator | 2025-09-29 06:14:29.729594 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-09-29 06:14:29.729636 | orchestrator | Monday 29 September 2025 06:09:14 +0000 (0:00:01.786) 0:00:45.225 ****** 2025-09-29 06:14:29.729647 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-09-29 06:14:29.729657 | orchestrator | changed: 
[testbed-node-1] => (item=haproxy-internal.pem) 2025-09-29 06:14:29.729666 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-09-29 06:14:29.729676 | orchestrator | 2025-09-29 06:14:29.729685 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-29 06:14:29.729694 | orchestrator | Monday 29 September 2025 06:09:16 +0000 (0:00:01.827) 0:00:47.053 ****** 2025-09-29 06:14:29.729704 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.729721 | orchestrator | 2025-09-29 06:14:29.729730 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-09-29 06:14:29.729739 | orchestrator | Monday 29 September 2025 06:09:17 +0000 (0:00:00.640) 0:00:47.693 ****** 2025-09-29 06:14:29.729755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-29 06:14:29.729834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-29 06:14:29.729844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2025-09-29 06:14:29.729854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-29 06:14:29.729864 | orchestrator | 2025-09-29 06:14:29.729873 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-29 06:14:29.729883 | orchestrator | Monday 29 September 2025 06:09:20 +0000 (0:00:03.398) 0:00:51.092 ****** 2025-09-29 06:14:29.729899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.729910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.729920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.729929 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.729939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.729959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.729969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.729979 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.729995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730058 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.730068 | orchestrator | 2025-09-29 06:14:29.730078 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-29 06:14:29.730087 | orchestrator | Monday 29 September 2025 06:09:22 +0000 (0:00:01.482) 0:00:52.574 ****** 2025-09-29 06:14:29.730097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730137 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.730147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730185 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.730195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730234 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.730244 | orchestrator | 2025-09-29 06:14:29.730254 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-29 06:14:29.730263 | orchestrator | Monday 29 September 2025 06:09:25 +0000 (0:00:02.943) 0:00:55.518 ****** 2025-09-29 06:14:29.730273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730288 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730309 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.730322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730349 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730403 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.730426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730474 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.730484 | orchestrator | 2025-09-29 06:14:29.730493 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-29 06:14:29.730503 | orchestrator | Monday 29 September 2025 06:09:27 +0000 (0:00:02.007) 0:00:57.525 ****** 2025-09-29 06:14:29.730513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730551 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.730564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-29 06:14:29.730575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-29 06:14:29.730585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-29 06:14:29.730595 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.730635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.730659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.730669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.730678 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.730688 | orchestrator |
2025-09-29 06:14:29.730697 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-29 06:14:29.730707 | orchestrator | Monday 29 September 2025 06:09:27 +0000 (0:00:00.657) 0:00:58.182 ******
2025-09-29 06:14:29.730717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.730731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.730741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.730751 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.730768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.730784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.730794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.730803 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.730813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.730824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.730837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.730848 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.730857 | orchestrator |
2025-09-29 06:14:29.730867 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-09-29 06:14:29.730877 | orchestrator | Monday 29 September 2025 06:09:28 +0000 (0:00:00.734) 0:00:58.917 ******
2025-09-29 06:14:29.730886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.730917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.730936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.730953 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.730993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731033 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.731042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731085 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.731095 | orchestrator |
2025-09-29 06:14:29.731105 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-09-29 06:14:29.731114 | orchestrator | Monday 29 September 2025 06:09:30 +0000 (0:00:01.628) 0:01:00.546 ******
2025-09-29 06:14:29.731124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731154 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.731169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731211 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.731221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731251 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.731260 | orchestrator |
2025-09-29 06:14:29.731270 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-09-29 06:14:29.731280 | orchestrator | Monday 29 September 2025 06:09:30 +0000 (0:00:00.659) 0:01:01.205 ******
2025-09-29 06:14:29.731293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731334 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.731344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731374 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.731384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731426 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.731436 | orchestrator |
2025-09-29 06:14:29.731446 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-09-29 06:14:29.731455 | orchestrator | Monday 29 September 2025 06:09:31 +0000 (0:00:00.812) 0:01:02.017 ******
2025-09-29 06:14:29.731465 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-29 06:14:29.731480 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-29 06:14:29.731490 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-09-29 06:14:29.731499 | orchestrator |
2025-09-29 06:14:29.731509 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-09-29 06:14:29.731518 | orchestrator | Monday 29 September 2025 06:09:33 +0000 (0:00:01.698) 0:01:03.716 ******
2025-09-29 06:14:29.731528 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-29 06:14:29.731537 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-29 06:14:29.731547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-09-29 06:14:29.731556 | orchestrator |
2025-09-29 06:14:29.731566 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-09-29 06:14:29.731576 | orchestrator | Monday 29 September 2025 06:09:35 +0000 (0:00:01.153) 0:01:05.446 ******
2025-09-29 06:14:29.731585 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-29 06:14:29.731594 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-29 06:14:29.731622 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-29 06:14:29.731633 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.731642 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-09-29 06:14:29.731652 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-29 06:14:29.731661 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.731670 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-29 06:14:29.731680 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.731689 | orchestrator |
2025-09-29 06:14:29.731698 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-09-29 06:14:29.731708 | orchestrator | Monday 29 September 2025 06:09:36 +0000 (0:00:01.153) 0:01:06.599 ******
2025-09-29 06:14:29.731718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-29 06:14:29.731765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-29 06:14:29.731795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-29 06:14:29.731834 | orchestrator |
2025-09-29 06:14:29.731844 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-09-29 06:14:29.731853 | orchestrator | Monday 29 September 2025 06:09:39 +0000 (0:00:03.590) 0:01:10.189 ******
2025-09-29 06:14:29.731862 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.731872 | orchestrator |
2025-09-29 06:14:29.731881 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-09-29 06:14:29.731891 | orchestrator | Monday 29 September 2025 06:09:40 +0000 (0:00:01.080) 0:01:11.270 ******
2025-09-29 06:14:29.731906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-29 06:14:29.731918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-29 06:14:29.731928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.731938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.731953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-09-29 06:14:29.731963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-09-29 06:14:29.731973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-29 06:14:29.732031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.732041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732064 | orchestrator | 2025-09-29 06:14:29.732074 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-29 06:14:29.732083 | orchestrator | Monday 29 September 2025 06:09:46 +0000 (0:00:05.141) 0:01:16.412 ****** 2025-09-29 06:14:29.732093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 
'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-29 06:14:29.732109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.732119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732135 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-29 06:14:29.732144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732154 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.732168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.732178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732203 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.732213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-29 06:14:29.732227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.732238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732257 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.732267 | orchestrator | 2025-09-29 06:14:29.732280 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-29 06:14:29.732290 | orchestrator | Monday 29 September 2025 06:09:47 +0000 (0:00:00.900) 0:01:17.312 ****** 2025-09-29 06:14:29.732300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-29 06:14:29.732309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-29 06:14:29.732319 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.732329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-29 06:14:29.732338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-29 06:14:29.732348 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.732357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-29 06:14:29.732372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8042', 'listen_port': '8042'}})  2025-09-29 06:14:29.732382 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.732392 | orchestrator | 2025-09-29 06:14:29.732401 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-29 06:14:29.732416 | orchestrator | Monday 29 September 2025 06:09:48 +0000 (0:00:01.129) 0:01:18.442 ****** 2025-09-29 06:14:29.732426 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.732435 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.732445 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.732454 | orchestrator | 2025-09-29 06:14:29.732464 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-29 06:14:29.732473 | orchestrator | Monday 29 September 2025 06:09:49 +0000 (0:00:01.184) 0:01:19.626 ****** 2025-09-29 06:14:29.732482 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.732492 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.732501 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.732510 | orchestrator | 2025-09-29 06:14:29.732520 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-29 06:14:29.732529 | orchestrator | Monday 29 September 2025 06:09:51 +0000 (0:00:01.854) 0:01:21.480 ****** 2025-09-29 06:14:29.732538 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.732548 | orchestrator | 2025-09-29 06:14:29.732557 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-29 06:14:29.732567 | orchestrator | Monday 29 September 2025 06:09:51 +0000 (0:00:00.695) 0:01:22.175 ****** 2025-09-29 06:14:29.732577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.732592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.732700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.732746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})  2025-09-29 06:14:29.732792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732801 | orchestrator | 2025-09-29 06:14:29.732817 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-29 06:14:29.732827 | orchestrator | Monday 29 September 2025 06:09:55 +0000 (0:00:04.110) 0:01:26.286 ****** 2025-09-29 06:14:29.732843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.732854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.732874 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.733043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.733098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.733120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.733130 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.733140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.733150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.733167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.733177 | orchestrator | skipping: [testbed-node-2] 2025-09-29 
06:14:29.733187 | orchestrator | 2025-09-29 06:14:29.733197 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-29 06:14:29.733206 | orchestrator | Monday 29 September 2025 06:09:56 +0000 (0:00:00.591) 0:01:26.878 ****** 2025-09-29 06:14:29.733216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-29 06:14:29.733226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-29 06:14:29.733235 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.733247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-29 06:14:29.733259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-29 06:14:29.733267 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.733275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-29 06:14:29.733282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-29 06:14:29.733290 | orchestrator | 
skipping: [testbed-node-2] 2025-09-29 06:14:29.733298 | orchestrator | 2025-09-29 06:14:29.733306 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-29 06:14:29.733313 | orchestrator | Monday 29 September 2025 06:09:57 +0000 (0:00:00.847) 0:01:27.725 ****** 2025-09-29 06:14:29.733321 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.733329 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.733336 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.733344 | orchestrator | 2025-09-29 06:14:29.733352 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-29 06:14:29.733359 | orchestrator | Monday 29 September 2025 06:09:58 +0000 (0:00:01.284) 0:01:29.010 ****** 2025-09-29 06:14:29.733367 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.733375 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.733382 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.733390 | orchestrator | 2025-09-29 06:14:29.733397 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-29 06:14:29.733410 | orchestrator | Monday 29 September 2025 06:10:00 +0000 (0:00:02.016) 0:01:31.027 ****** 2025-09-29 06:14:29.733422 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.733436 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.733449 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.733482 | orchestrator | 2025-09-29 06:14:29.733494 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-29 06:14:29.733502 | orchestrator | Monday 29 September 2025 06:10:01 +0000 (0:00:00.348) 0:01:31.375 ****** 2025-09-29 06:14:29.733509 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.733517 | orchestrator | 2025-09-29 06:14:29.733525 | 
orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-29 06:14:29.733532 | orchestrator | Monday 29 September 2025 06:10:01 +0000 (0:00:00.853) 0:01:32.229 ****** 2025-09-29 06:14:29.733541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-29 06:14:29.733556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-29 06:14:29.733575 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-29 06:14:29.733583 | orchestrator | 2025-09-29 06:14:29.733591 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-29 06:14:29.733599 | orchestrator | Monday 29 September 2025 06:10:04 +0000 (0:00:02.441) 0:01:34.671 ****** 2025-09-29 06:14:29.733625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-29 
06:14:29.733635 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.733644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-29 06:14:29.733654 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.733663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-29 06:14:29.733681 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.733691 | orchestrator | 2025-09-29 
06:14:29.733700 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-29 06:14:29.733709 | orchestrator | Monday 29 September 2025 06:10:06 +0000 (0:00:01.670) 0:01:36.341 ****** 2025-09-29 06:14:29.733718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-29 06:14:29.733733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-29 06:14:29.733742 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.733752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-29 06:14:29.733762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-29 06:14:29.733771 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.733780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-29 06:14:29.733790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-29 06:14:29.733799 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.733808 | orchestrator | 2025-09-29 06:14:29.733816 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-29 06:14:29.733825 | orchestrator | Monday 29 September 2025 06:10:07 +0000 (0:00:01.672) 0:01:38.014 ****** 2025-09-29 06:14:29.733834 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.733843 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.733851 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.733860 | orchestrator | 2025-09-29 06:14:29.733868 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-29 06:14:29.733882 | orchestrator | Monday 29 September 2025 06:10:08 +0000 (0:00:00.681) 0:01:38.696 ****** 2025-09-29 06:14:29.733890 | 
orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.733897 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.733905 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.733912 | orchestrator | 2025-09-29 06:14:29.733920 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-29 06:14:29.733928 | orchestrator | Monday 29 September 2025 06:10:09 +0000 (0:00:01.216) 0:01:39.913 ****** 2025-09-29 06:14:29.733935 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.733943 | orchestrator | 2025-09-29 06:14:29.733951 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-29 06:14:29.733958 | orchestrator | Monday 29 September 2025 06:10:10 +0000 (0:00:00.746) 0:01:40.659 ****** 2025-09-29 06:14:29.733971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.733983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.733992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.734100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734137 | orchestrator | 2025-09-29 06:14:29.734145 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-29 06:14:29.734153 | orchestrator | Monday 29 September 2025 06:10:14 +0000 (0:00:03.759) 0:01:44.419 ****** 2025-09-29 06:14:29.734165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.734173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734203 | 
orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.734233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.734245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734275 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.734284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.734296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734324 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.734332 | orchestrator | 2025-09-29 06:14:29.734340 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-29 06:14:29.734349 | orchestrator | Monday 29 September 2025 06:10:14 +0000 (0:00:00.763) 0:01:45.183 ****** 2025-09-29 06:14:29.734366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-29 06:14:29.734380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-29 06:14:29.734394 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.734423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-29 06:14:29.734437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-29 06:14:29.734446 | orchestrator | skipping: [testbed-node-1] 
2025-09-29 06:14:29.734454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-29 06:14:29.734462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-29 06:14:29.734470 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.734477 | orchestrator | 2025-09-29 06:14:29.734485 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-09-29 06:14:29.734493 | orchestrator | Monday 29 September 2025 06:10:15 +0000 (0:00:00.924) 0:01:46.107 ****** 2025-09-29 06:14:29.734500 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.734508 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.734516 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.734523 | orchestrator | 2025-09-29 06:14:29.734530 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-09-29 06:14:29.734538 | orchestrator | Monday 29 September 2025 06:10:17 +0000 (0:00:01.361) 0:01:47.468 ****** 2025-09-29 06:14:29.734546 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.734553 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.734561 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.734568 | orchestrator | 2025-09-29 06:14:29.734576 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-09-29 06:14:29.734584 | orchestrator | Monday 29 September 2025 06:10:19 +0000 (0:00:01.934) 0:01:49.403 ****** 2025-09-29 06:14:29.734619 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.734632 | orchestrator | skipping: [testbed-node-1] 
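The per-service loop items echoed above follow kolla-ansible's haproxy-config shape: each key such as `cinder_api` / `cinder_api_external` maps to a dict with `mode`, `port`, `listen_port`, and `tls_backend`. As a minimal sketch of how such an entry expands into an haproxy `listen` stanza (the `render_listen` helper and the VIP address are illustrative assumptions, not kolla-ansible code):

```python
# Sketch only: mirrors the shape of the cinder_api haproxy dict logged above.
# render_listen() is a hypothetical helper, not part of kolla-ansible.

def render_listen(name, svc, vip, nodes):
    """Expand one haproxy service entry into a haproxy 'listen' stanza."""
    lines = [f"listen {name}"]
    lines.append(f"    mode {svc['mode']}")
    lines.append(f"    bind {vip}:{svc['listen_port']}")
    for host, addr in nodes.items():
        # tls_backend is 'no' in the log, so plain health-checked servers
        lines.append(
            f"    server {host} {addr}:{svc['port']} check inter 2000 rise 2 fall 5"
        )
    return "\n".join(lines)

# Values copied from the cinder_api item in the log; the VIP is assumed.
cinder_api = {
    "enabled": "yes", "mode": "http", "external": False,
    "port": "8776", "listen_port": "8776", "tls_backend": "no",
}
nodes = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}
print(render_listen("cinder_api", cinder_api, "192.168.16.9", nodes))
```

The external variant carries the same ports plus an `external_fqdn` (`api.testbed.osism.xyz` here), which haproxy uses for SNI/Host-based routing on the shared external frontend.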
2025-09-29 06:14:29.734639 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.734647 | orchestrator | 2025-09-29 06:14:29.734654 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-09-29 06:14:29.734662 | orchestrator | Monday 29 September 2025 06:10:19 +0000 (0:00:00.378) 0:01:49.781 ****** 2025-09-29 06:14:29.734669 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.734677 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.734684 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.734692 | orchestrator | 2025-09-29 06:14:29.734699 | orchestrator | TASK [include_role : designate] ************************************************ 2025-09-29 06:14:29.734707 | orchestrator | Monday 29 September 2025 06:10:19 +0000 (0:00:00.270) 0:01:50.052 ****** 2025-09-29 06:14:29.734714 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.734722 | orchestrator | 2025-09-29 06:14:29.734730 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-09-29 06:14:29.734743 | orchestrator | Monday 29 September 2025 06:10:20 +0000 (0:00:00.718) 0:01:50.771 ****** 2025-09-29 06:14:29.734757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:14:29.734767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:14:29.734775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:14:29.734784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:14:29.734825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:14:29.734959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:14:29.734967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.734988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735022 | orchestrator | 2025-09-29 06:14:29.735030 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-29 06:14:29.735038 | orchestrator | Monday 29 September 2025 06:10:24 +0000 (0:00:03.582) 0:01:54.354 ****** 2025-09-29 06:14:29.735046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:14:29.735054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:14:29.735066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:14:29.735094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:14:29.735103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:14:29.735155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:14:29.735179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735226 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.735237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735254 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.735262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.735286 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.735294 | orchestrator | 2025-09-29 06:14:29.735302 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-09-29 06:14:29.735314 | orchestrator | Monday 29 September 2025 06:10:24 +0000 (0:00:00.852) 0:01:55.206 ****** 2025-09-29 06:14:29.735322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-29 06:14:29.735330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-29 06:14:29.735341 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.735354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-29 06:14:29.735378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-09-29 06:14:29.735397 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.735410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-09-29 06:14:29.735423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9001', 'listen_port': '9001'}})  2025-09-29 06:14:29.735436 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.735449 | orchestrator | 2025-09-29 06:14:29.735469 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-09-29 06:14:29.735483 | orchestrator | Monday 29 September 2025 06:10:25 +0000 (0:00:01.025) 0:01:56.232 ****** 2025-09-29 06:14:29.735496 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.735504 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.735512 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.735519 | orchestrator | 2025-09-29 06:14:29.735527 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-09-29 06:14:29.735535 | orchestrator | Monday 29 September 2025 06:10:27 +0000 (0:00:01.998) 0:01:58.230 ****** 2025-09-29 06:14:29.735543 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.735550 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.735558 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.735566 | orchestrator | 2025-09-29 06:14:29.735573 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-09-29 06:14:29.735581 | orchestrator | Monday 29 September 2025 06:10:29 +0000 (0:00:01.751) 0:01:59.981 ****** 2025-09-29 06:14:29.735588 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.735596 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.735665 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.735676 | orchestrator | 2025-09-29 06:14:29.735684 | orchestrator | TASK [include_role : glance] *************************************************** 2025-09-29 06:14:29.735691 | orchestrator | Monday 29 September 2025 06:10:30 +0000 (0:00:00.388) 0:02:00.370 ****** 2025-09-29 06:14:29.735699 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 
2025-09-29 06:14:29.735707 | orchestrator | 2025-09-29 06:14:29.735714 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-09-29 06:14:29.735720 | orchestrator | Monday 29 September 2025 06:10:30 +0000 (0:00:00.779) 0:02:01.150 ****** 2025-09-29 06:14:29.735729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:14:29.735756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.735765 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:14:29.735786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.735794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:14:29.735811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.735819 | orchestrator | 2025-09-29 06:14:29.735826 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-29 06:14:29.735833 | orchestrator | Monday 29 September 2025 06:10:34 +0000 (0:00:04.092) 0:02:05.242 ****** 2025-09-29 06:14:29.735843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:14:29.735858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.735866 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.735878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:14:29.735886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.735898 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.735913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:14:29.735922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.735933 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.735940 | orchestrator | 2025-09-29 06:14:29.735946 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-29 06:14:29.735953 | orchestrator | Monday 29 September 2025 06:10:38 +0000 (0:00:03.485) 0:02:08.727 ****** 2025-09-29 06:14:29.735960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-29 06:14:29.735971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-29 06:14:29.735978 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.735988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-29 06:14:29.735995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-29 06:14:29.736002 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.736009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-29 06:14:29.736019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-29 06:14:29.736026 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.736033 | orchestrator | 2025-09-29 06:14:29.736039 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-29 06:14:29.736046 | orchestrator | Monday 29 September 2025 06:10:42 +0000 (0:00:03.786) 0:02:12.513 ****** 2025-09-29 06:14:29.736053 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.736059 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.736066 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.736073 | orchestrator | 2025-09-29 06:14:29.736079 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-29 06:14:29.736086 | orchestrator | Monday 29 September 2025 06:10:43 +0000 (0:00:01.338) 0:02:13.852 ****** 2025-09-29 06:14:29.736092 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.736099 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.736106 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.736118 | orchestrator | 2025-09-29 06:14:29.736130 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-29 06:14:29.736141 | orchestrator | Monday 29 September 2025 06:10:45 +0000 (0:00:02.078) 0:02:15.930 
****** 2025-09-29 06:14:29.736153 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.736165 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.736177 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.736185 | orchestrator | 2025-09-29 06:14:29.736192 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-29 06:14:29.736199 | orchestrator | Monday 29 September 2025 06:10:46 +0000 (0:00:00.424) 0:02:16.354 ****** 2025-09-29 06:14:29.736205 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.736212 | orchestrator | 2025-09-29 06:14:29.736218 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-29 06:14:29.736225 | orchestrator | Monday 29 September 2025 06:10:46 +0000 (0:00:00.757) 0:02:17.112 ****** 2025-09-29 06:14:29 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:14:29 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:14:29 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:14:29 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:14:29.736237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:14:29.736281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:14:29.736288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:14:29.736295 | orchestrator | 2025-09-29 06:14:29.736302 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-29 06:14:29.736308 | orchestrator | Monday 29 September 2025 06:10:49 +0000 (0:00:02.922) 0:02:20.035 ****** 2025-09-29 06:14:29.736315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-29 06:14:29.736322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-29 06:14:29.736329 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.736335 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.736346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-29 06:14:29.736358 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.736365 | orchestrator | 2025-09-29 06:14:29.736522 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-29 06:14:29.736542 | orchestrator | Monday 29 September 2025 06:10:50 +0000 (0:00:00.661) 0:02:20.697 ****** 2025-09-29 06:14:29.736550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-29 06:14:29.736562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-29 06:14:29.736570 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.736582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-29 06:14:29.736593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-29 06:14:29.736625 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.736637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-29 06:14:29.736645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}})  2025-09-29 06:14:29.736652 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.736659 | orchestrator | 2025-09-29 06:14:29.736666 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-29 06:14:29.736672 | orchestrator | Monday 29 September 2025 06:10:51 +0000 (0:00:00.633) 0:02:21.330 ****** 2025-09-29 06:14:29.736679 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.736685 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.736692 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.736699 | orchestrator | 2025-09-29 06:14:29.736705 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-29 06:14:29.736712 | orchestrator | Monday 29 September 2025 06:10:52 +0000 (0:00:01.208) 0:02:22.539 ****** 2025-09-29 06:14:29.736719 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.736725 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.736732 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.736739 | orchestrator | 2025-09-29 06:14:29.736745 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-29 06:14:29.736752 | orchestrator | Monday 29 September 2025 06:10:54 +0000 (0:00:01.963) 0:02:24.502 ****** 2025-09-29 06:14:29.736758 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.736765 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.736771 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.736778 | orchestrator | 2025-09-29 06:14:29.736784 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-29 06:14:29.736791 | orchestrator | Monday 29 September 2025 06:10:54 +0000 (0:00:00.395) 0:02:24.898 ****** 2025-09-29 06:14:29.736797 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 
06:14:29.736804 | orchestrator | 2025-09-29 06:14:29.736811 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-29 06:14:29.736817 | orchestrator | Monday 29 September 2025 06:10:55 +0000 (0:00:00.789) 0:02:25.688 ****** 2025-09-29 06:14:29.736891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:14:29.736931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:14:29.737058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:14:29.737075 | orchestrator | 2025-09-29 06:14:29.737082 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-29 06:14:29.737089 | orchestrator | Monday 29 September 2025 06:10:58 +0000 (0:00:03.311) 0:02:29.000 ****** 2025-09-29 06:14:29.737096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:14:29.737109 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.737218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 
'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:14:29.737236 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.737249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 
'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:14:29.737283 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.737296 | orchestrator | 
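[editorial aside] Every horizon haproxy entry echoed in the items above attaches the rule `use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }` to both the HTTP and the redirect frontends, so ACME HTTP-01 challenge requests bypass the normal horizon backends and the HTTP-to-HTTPS redirect. A minimal sketch of which request paths that `path_reg` expression captures, assuming only the regex taken verbatim from the log (the helper name is illustrative, not part of kolla-ansible):

```python
import re

# Same pattern HAProxy evaluates via 'path_reg' in the frontends above;
# '.' is escaped here, but the unescaped dot in the log also matches a
# literal '.' since '.' matches any character.
ACME_CHALLENGE_RE = re.compile(r"^/\.well-known/acme-challenge/.+")

def routes_to_acme_backend(path: str) -> bool:
    """Illustrative helper: True if the path would be sent to acme_client_back."""
    return ACME_CHALLENGE_RE.match(path) is not None
```

In effect, a request for `/.well-known/acme-challenge/<token>` is diverted to the ACME client even on the redirect frontend, so certificate validation is never caught by the port-80 redirect.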
2025-09-29 06:14:29.737303 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-29 06:14:29.737310 | orchestrator | Monday 29 September 2025 06:10:59 +0000 (0:00:00.885) 0:02:29.885 ****** 2025-09-29 06:14:29.737373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-29 06:14:29.737409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-29 06:14:29.737438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-29 06:14:29.737451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-29 06:14:29.737458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-29 06:14:29.737465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-29 06:14:29.737472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-29 06:14:29.737479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-29 06:14:29.737492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-29 06:14:29.737500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-29 06:14:29.737506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-29 06:14:29.737513 
| orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.737519 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.737527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-29 06:14:29.737538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-29 06:14:29.737617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-29 06:14:29.737636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-29 06:14:29.737674 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.737687 | orchestrator | 2025-09-29 06:14:29.737694 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-29 06:14:29.737706 | orchestrator | Monday 29 September 2025 06:11:00 +0000 (0:00:00.922) 0:02:30.807 ****** 2025-09-29 06:14:29.737712 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.737719 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.737725 | orchestrator | changed: [testbed-node-2] 2025-09-29 
06:14:29.737732 | orchestrator | 2025-09-29 06:14:29.737738 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-29 06:14:29.737745 | orchestrator | Monday 29 September 2025 06:11:01 +0000 (0:00:01.221) 0:02:32.029 ****** 2025-09-29 06:14:29.737751 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.737758 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.737764 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.737771 | orchestrator | 2025-09-29 06:14:29.737777 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-29 06:14:29.737784 | orchestrator | Monday 29 September 2025 06:11:03 +0000 (0:00:01.992) 0:02:34.022 ****** 2025-09-29 06:14:29.737790 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.737796 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.737803 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.737809 | orchestrator | 2025-09-29 06:14:29.737816 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-29 06:14:29.737822 | orchestrator | Monday 29 September 2025 06:11:04 +0000 (0:00:00.322) 0:02:34.345 ****** 2025-09-29 06:14:29.737829 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.737835 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.737850 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.737862 | orchestrator | 2025-09-29 06:14:29.737874 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-29 06:14:29.737886 | orchestrator | Monday 29 September 2025 06:11:04 +0000 (0:00:00.540) 0:02:34.885 ****** 2025-09-29 06:14:29.737910 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.737922 | orchestrator | 2025-09-29 06:14:29.737929 | orchestrator | TASK [haproxy-config : 
Copying over keystone haproxy config] ******************* 2025-09-29 06:14:29.737936 | orchestrator | Monday 29 September 2025 06:11:05 +0000 (0:00:00.981) 0:02:35.867 ****** 2025-09-29 06:14:29.737943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:14:29.737952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:14:29.738055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:14:29.738077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:14:29.738085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:14:29.738098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:14:29.738106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:14:29.738113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:14:29.738176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:14:29.738204 | orchestrator |
2025-09-29 06:14:29.738216 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-09-29 06:14:29.738223 | orchestrator | Monday 29 September 2025 06:11:10 +0000 (0:00:05.176) 0:02:41.044 ******
2025-09-29 06:14:29.738230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:14:29.738243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:14:29.738251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:14:29.738263 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.738275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:14:29.738361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:14:29.738397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:14:29.738418 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.738446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:14:29.738455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:14:29.738462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:14:29.738469 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.738475 | orchestrator |
2025-09-29 06:14:29.738482 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-09-29 06:14:29.738489 | orchestrator | Monday 29 September 2025 06:11:11 +0000 (0:00:00.813) 0:02:41.857 ******
2025-09-29 06:14:29.738496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-29 06:14:29.738503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-29 06:14:29.738510 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.738572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-29 06:14:29.738652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-29 06:14:29.738694 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.738702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-29 06:14:29.738709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-09-29 06:14:29.738716 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.738722 | orchestrator |
2025-09-29 06:14:29.738729 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-09-29 06:14:29.738735 | orchestrator | Monday 29 September 2025 06:11:12 +0000 (0:00:01.170) 0:02:42.727 ******
2025-09-29 06:14:29.738742 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.738748 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.738754 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.738761 | orchestrator |
2025-09-29 06:14:29.738767 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-09-29 06:14:29.738774 | orchestrator | Monday 29 September 2025 06:11:13 +0000 (0:00:01.170) 0:02:43.898 ******
2025-09-29 06:14:29.738780 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.738787 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.738796 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.738807 | orchestrator |
2025-09-29 06:14:29.738819 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-09-29 06:14:29.738835 | orchestrator | Monday 29 September 2025 06:11:15 +0000 (0:00:01.916) 0:02:45.814 ******
2025-09-29 06:14:29.738842 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.738849 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.738855 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.738862 | orchestrator |
2025-09-29 06:14:29.738868 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-09-29 06:14:29.738875 | orchestrator | Monday 29 September 2025 06:11:15 +0000 (0:00:00.435) 0:02:46.249 ******
2025-09-29 06:14:29.738882 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.738888 | orchestrator |
2025-09-29 06:14:29.738895 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-09-29 06:14:29.738901 | orchestrator | Monday 29 September 2025 06:11:16 +0000 (0:00:00.882) 0:02:47.132 ******
2025-09-29 06:14:29.738908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-29 06:14:29.738915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.738992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-29 06:14:29.739014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-29 06:14:29.739028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739035 | orchestrator |
2025-09-29 06:14:29.739042 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-09-29 06:14:29.739048 | orchestrator | Monday 29 September 2025 06:11:20 +0000 (0:00:03.205) 0:02:50.338 ******
2025-09-29 06:14:29.739061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-29 06:14:29.739119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-29 06:14:29.739130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739143 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.739149 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.739156 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-29 06:14:29.739169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739175 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.739181 | orchestrator |
2025-09-29 06:14:29.739224 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-09-29 06:14:29.739248 | orchestrator | Monday 29 September 2025 06:11:20 +0000 (0:00:00.928) 0:02:51.266 ******
2025-09-29 06:14:29.739274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-29 06:14:29.739280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-29 06:14:29.739287 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.739293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-29 06:14:29.739299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-29 06:14:29.739306 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.739312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-09-29 06:14:29.739318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-09-29 06:14:29.739324 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.739331 | orchestrator |
2025-09-29 06:14:29.739337 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-09-29 06:14:29.739346 | orchestrator | Monday 29 September 2025 06:11:21 +0000 (0:00:00.772) 0:02:52.038 ******
2025-09-29 06:14:29.739357 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.739374 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.739381 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.739387 | orchestrator |
2025-09-29 06:14:29.739394 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-09-29 06:14:29.739400 | orchestrator | Monday 29 September 2025 06:11:22 +0000 (0:00:01.204) 0:02:53.242 ******
2025-09-29 06:14:29.739406 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.739412 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.739418 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.739424 | orchestrator |
2025-09-29 06:14:29.739431 | orchestrator | TASK [include_role : manila] ***************************************************
2025-09-29 06:14:29.739448 | orchestrator | Monday 29 September 2025 06:11:24 +0000 (0:00:01.865) 0:02:55.108 ******
2025-09-29 06:14:29.739466 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.739472 | orchestrator |
2025-09-29 06:14:29.739478 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-09-29 06:14:29.739484 | orchestrator | Monday 29 September 2025 06:11:25 +0000 (0:00:01.097) 0:02:56.205 ******
2025-09-29 06:14:29.739491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-29 06:14:29.739551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-29 06:14:29.739561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-29 06:14:29.739598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739749 | orchestrator |
2025-09-29 06:14:29.739755 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-09-29 06:14:29.739761 | orchestrator | Monday 29 September 2025 06:11:29 +0000 (0:00:03.479) 0:02:59.685 ******
2025-09-29 06:14:29.739768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-09-29 06:14:29.739778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.739871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value':
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.739900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.739912 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.739924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 
'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-29 06:14:29.739943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.739955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.739967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.739992 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.740059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-29 06:14:29.740086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.740104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.740115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.740127 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.740150 | orchestrator | 2025-09-29 06:14:29.740162 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-29 06:14:29.740169 | orchestrator | Monday 29 September 2025 06:11:30 +0000 (0:00:00.657) 0:03:00.343 ****** 2025-09-29 06:14:29.740175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-29 06:14:29.740181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-29 06:14:29.740187 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.740193 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-29 06:14:29.740199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-29 06:14:29.740205 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.740211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-29 06:14:29.740217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-29 06:14:29.740224 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.740229 | orchestrator | 2025-09-29 06:14:29.740235 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-29 06:14:29.740242 | orchestrator | Monday 29 September 2025 06:11:31 +0000 (0:00:01.127) 0:03:01.470 ****** 2025-09-29 06:14:29.740301 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.740316 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.740327 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.740361 | orchestrator | 2025-09-29 06:14:29.740372 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-29 06:14:29.740378 | orchestrator | Monday 29 September 2025 06:11:32 +0000 (0:00:01.193) 0:03:02.663 ****** 2025-09-29 06:14:29.740384 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.740395 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.740401 | orchestrator | changed: 
[testbed-node-2] 2025-09-29 06:14:29.740407 | orchestrator | 2025-09-29 06:14:29.740420 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-29 06:14:29.740427 | orchestrator | Monday 29 September 2025 06:11:34 +0000 (0:00:02.003) 0:03:04.667 ****** 2025-09-29 06:14:29.740433 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.740439 | orchestrator | 2025-09-29 06:14:29.740445 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-29 06:14:29.740451 | orchestrator | Monday 29 September 2025 06:11:35 +0000 (0:00:01.352) 0:03:06.020 ****** 2025-09-29 06:14:29.740457 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-29 06:14:29.740463 | orchestrator | 2025-09-29 06:14:29.740469 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-29 06:14:29.740475 | orchestrator | Monday 29 September 2025 06:11:38 +0000 (0:00:02.987) 0:03:09.007 ****** 2025-09-29 06:14:29.740482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:14:29.740549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:14:29.740599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-29 06:14:29.740626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-29 06:14:29.740635 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.740643 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.740652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:14:29.740716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-29 06:14:29.740750 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.740761 | orchestrator | 2025-09-29 06:14:29.740768 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-29 06:14:29.740778 | orchestrator | Monday 29 September 2025 06:11:40 +0000 (0:00:02.115) 0:03:11.122 ****** 2025-09-29 06:14:29.740788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:14:29.740800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-29 06:14:29.740811 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.740897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:14:29.740933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-29 06:14:29.740940 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.740946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' 
server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-09-29 06:14:29.740953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-09-29 06:14:29.740963 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.740969 | orchestrator |
2025-09-29 06:14:29.740976 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-09-29 06:14:29.740982 | orchestrator | Monday 29 September 2025 06:11:42 +0000 (0:00:02.112) 0:03:13.235 ******
2025-09-29 06:14:29.741067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-29 06:14:29.741100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-29 06:14:29.741109 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-29 06:14:29.741122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-29 06:14:29.741128 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.741134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-29 06:14:29.741141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-09-29 06:14:29.741155 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741166 | orchestrator |
2025-09-29 06:14:29.741176 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-09-29 06:14:29.741187 | orchestrator | Monday 29 September 2025 06:11:45 +0000 (0:00:02.379) 0:03:15.614 ******
2025-09-29 06:14:29.741211 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.741219 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.741225 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.741231 | orchestrator |
2025-09-29 06:14:29.741237 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-09-29 06:14:29.741243 | orchestrator | Monday 29 September 2025 06:11:46 +0000 (0:00:01.661) 0:03:17.276 ******
2025-09-29 06:14:29.741249 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741255 | orchestrator | skipping: [testbed-node-1]
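The `custom_member_list` entries in the skipped items above follow the usual active/passive pattern for a Galera cluster behind HAProxy: the first node is the active writer and every other node carries the `backup` flag, so HAProxy fails over rather than spreading writes across the cluster. As a minimal illustrative sketch (plain Python, not kolla-ansible code; `galera_member_lines` is a hypothetical helper) of how such member lines can be assembled:

```python
# Illustrative sketch only (not taken from kolla-ansible): build HAProxy
# "server" lines like the custom_member_list entries above. The first node
# is the active backend; all later nodes get the "backup" flag so HAProxy
# only uses them when the primary's health check fails.
def galera_member_lines(nodes, port=3306):
    lines = []
    for index, (name, address) in enumerate(nodes):
        line = (f" server {name} {address}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if index > 0:  # every node after the first is a hot standby
            line += " backup"
        lines.append(line)
    return lines

members = galera_member_lines([
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
])
```

With the three testbed nodes this reproduces the internal `mariadb` member list shown in the task output: node-0 without `backup`, node-1 and node-2 with it.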
2025-09-29 06:14:29.741261 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741267 | orchestrator |
2025-09-29 06:14:29.741273 | orchestrator | TASK [include_role : masakari] *************************************************
2025-09-29 06:14:29.741279 | orchestrator | Monday 29 September 2025 06:11:48 +0000 (0:00:01.209) 0:03:18.485 ******
2025-09-29 06:14:29.741339 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741354 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.741365 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741401 | orchestrator |
2025-09-29 06:14:29.741408 | orchestrator | TASK [include_role : memcached] ************************************************
2025-09-29 06:14:29.741415 | orchestrator | Monday 29 September 2025 06:11:48 +0000 (0:00:00.265) 0:03:18.750 ******
2025-09-29 06:14:29.741424 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.741431 | orchestrator |
2025-09-29 06:14:29.741437 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-09-29 06:14:29.741443 | orchestrator | Monday 29 September 2025 06:11:49 +0000 (0:00:01.184) 0:03:19.935 ******
2025-09-29 06:14:29.741449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server
3600s'], 'active_passive': True}}}})
2025-09-29 06:14:29.741457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-29 06:14:29.741463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-29 06:14:29.741476 | orchestrator |
2025-09-29 06:14:29.741482 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-09-29 06:14:29.741488 | orchestrator | Monday 29 September 2025 06:11:51 +0000 (0:00:01.511) 0:03:21.446 ******
2025-09-29 06:14:29.741494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-29 06:14:29.741501 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-29 06:14:29.741590 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.741600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-09-29 06:14:29.741623 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741634 | orchestrator |
2025-09-29 06:14:29.741645 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-09-29 06:14:29.741668 | orchestrator | Monday 29 September 2025 06:11:51 +0000 (0:00:00.361) 0:03:21.807 ******
2025-09-29 06:14:29.741680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-29 06:14:29.741687 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-29 06:14:29.741706 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.741712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-09-29 06:14:29.741718 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741724 | orchestrator |
2025-09-29 06:14:29.741730 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-09-29 06:14:29.741736 | orchestrator | Monday 29 September 2025 06:11:52 +0000 (0:00:00.697) 0:03:22.505 ******
2025-09-29 06:14:29.741742 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741748 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.741755 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741761 | orchestrator |
2025-09-29 06:14:29.741766 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-09-29 06:14:29.741773 | orchestrator | Monday 29 September 2025 06:11:52 +0000 (0:00:00.366) 0:03:22.872 ******
2025-09-29 06:14:29.741779 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741785 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.741791 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741797 | orchestrator |
2025-09-29 06:14:29.741803 | orchestrator | TASK [include_role : mistral] **************************************************
2025-09-29 06:14:29.741811 | orchestrator | Monday 29 September 2025 06:11:53 +0000 (0:00:01.068) 0:03:23.941 ******
2025-09-29 06:14:29.741822 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.741833 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.741856 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.741865 | orchestrator |
2025-09-29 06:14:29.741872 | orchestrator | TASK [include_role : neutron] **************************************************
2025-09-29 06:14:29.741878 | orchestrator | Monday 29 September 2025 06:11:53 +0000 (0:00:00.290) 0:03:24.231 ******
2025-09-29 06:14:29.741884 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.741890 | orchestrator |
2025-09-29 06:14:29.741895 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
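Each service item in the output above and below carries a `healthcheck` dict with `interval`, `retries`, `start_period`, `test`, and `timeout` keys, where `test` is a `CMD-SHELL` command list. As a rough sketch of what those keys correspond to in Docker healthcheck terms (an assumed mapping for illustration only; `docker_healthcheck_args` is a hypothetical helper, not kolla-ansible code):

```python
# Hypothetical helper for illustration: translate a healthcheck dict like
# the ones in these log items into docker-run style health options. The
# leading "CMD-SHELL" marker is dropped and the rest becomes the probe
# command; interval/start_period/timeout are given in seconds.
def docker_healthcheck_args(hc):
    return [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
        "--health-cmd", " ".join(hc["test"][1:]),
    ]

# Values taken from the neutron-server item logged for testbed-node-0.
args = docker_healthcheck_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"],
    "timeout": "30",
})
```

With the `neutron-server` values from the log this amounts to probing `http://192.168.16.10:9696` every 30 seconds, allowing 3 failures before the container is marked unhealthy.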
2025-09-29 06:14:29.741902 | orchestrator | Monday 29 September 2025 06:11:55 +0000 (0:00:01.446) 0:03:25.677 ****** 2025-09-29 06:14:29.741968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:14:29.741997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-29 06:14:29.742063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.742183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:14:29.742190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.742416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.742459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-29 06:14:29.742468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:14:29.742694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.742702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742850 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-29 06:14:29.742856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.742928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.742961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.742984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.742994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.743082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-29 06:14:29.743118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.743124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-29 06:14:29.743142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.743151 | orchestrator | 2025-09-29 06:14:29.743169 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-29 06:14:29.743175 | orchestrator | Monday 29 September 2025 06:11:59 +0000 (0:00:04.163) 0:03:29.840 ****** 2025-09-29 06:14:29.743234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:14:29.743248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:14:29.743260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-29 06:14:29.743376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-29 06:14:29.743472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.743501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.743513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.743519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-29 06:14:29.743534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-09-29 06:14:29.743617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.743629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.743646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:14:29.743656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-29 06:14:29.743740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-29 06:14:29.743757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-29 06:14:29.743762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-09-29 06:14:29.743841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-29 06:14:29.743849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-29 06:14:29.743854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-29 06:14:29.743897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-29 06:14:29.743904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-29 06:14:29.743910 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.743915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-29 06:14:29.743921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-29 06:14:29.743929 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.743939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-29 06:14:29.743984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.743990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-09-29 06:14:29.743996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-09-29 06:14:29.744065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-09-29 06:14:29.744093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-09-29 06:14:29.744098 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.744104 | orchestrator |
2025-09-29 06:14:29.744109 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-09-29 06:14:29.744115 | orchestrator | Monday 29 September 2025 06:12:01 +0000 (0:00:01.502) 0:03:31.343 ******
2025-09-29 06:14:29.744121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-29 06:14:29.744126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-29 06:14:29.744132 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.744156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-29 06:14:29.744165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-29 06:14:29.744171 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.744176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-09-29 06:14:29.744182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-09-29 06:14:29.744187 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.744192 | orchestrator |
2025-09-29 06:14:29.744198 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-09-29 06:14:29.744203 | orchestrator | Monday 29 September 2025 06:12:02 +0000 (0:00:01.949) 0:03:33.292 ******
2025-09-29 06:14:29.744209 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.744214 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.744219 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.744225 | orchestrator |
2025-09-29 06:14:29.744230 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-09-29 06:14:29.744235 | orchestrator | Monday 29 September 2025 06:12:04 +0000 (0:00:01.247) 0:03:34.540 ******
2025-09-29 06:14:29.744240 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.744246 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.744251 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.744256 | orchestrator |
2025-09-29 06:14:29.744262 | orchestrator | TASK [include_role : placement] ************************************************
2025-09-29 06:14:29.744267 | orchestrator | Monday 29 September 2025 06:12:06 +0000 (0:00:02.071) 0:03:36.611 ******
2025-09-29 06:14:29.744276 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.744281 | orchestrator |
2025-09-29 06:14:29.744286 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-09-29 06:14:29.744292 | orchestrator | Monday 29 September 2025 06:12:07 +0000 (0:00:01.157) 0:03:37.769 ******
2025-09-29 06:14:29.744297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744332 | orchestrator |
2025-09-29 06:14:29.744337 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-09-29 06:14:29.744342 | orchestrator | Monday 29 September 2025 06:12:10 +0000 (0:00:03.293) 0:03:41.062 ******
2025-09-29 06:14:29.744348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744358 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.744363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744369 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.744374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744380 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.744385 | orchestrator |
2025-09-29 06:14:29.744390 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-09-29 06:14:29.744396 | orchestrator | Monday 29 September 2025 06:12:11 +0000 (0:00:00.439) 0:03:41.501 ******
2025-09-29 06:14:29.744401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-29 06:14:29.744407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-29 06:14:29.744412 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.744430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-29 06:14:29.744439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-29 06:14:29.744446 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.744452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-29 06:14:29.744458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-09-29 06:14:29.744468 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.744474 | orchestrator |
2025-09-29 06:14:29.744480 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-09-29 06:14:29.744489 | orchestrator | Monday 29 September 2025 06:12:11 +0000 (0:00:00.684) 0:03:42.185 ******
2025-09-29 06:14:29.744498 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.744507 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.744517 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.744528 | orchestrator |
2025-09-29 06:14:29.744538 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-09-29 06:14:29.744544 | orchestrator | Monday 29 September 2025 06:12:13 +0000 (0:00:01.564) 0:03:43.750 ******
2025-09-29 06:14:29.744550 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.744557 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.744563 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.744569 | orchestrator |
2025-09-29 06:14:29.744575 | orchestrator | TASK [include_role : nova] *****************************************************
2025-09-29 06:14:29.744581 | orchestrator | Monday 29 September 2025 06:12:15 +0000 (0:00:01.353) 0:03:45.435 ******
2025-09-29 06:14:29.744587 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.744593 | orchestrator |
2025-09-29 06:14:29.744599 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-09-29 06:14:29.744622 | orchestrator | Monday 29 September 2025 06:12:16 +0000 (0:00:01.353) 0:03:46.789 ******
2025-09-29 06:14:29.744629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744735 | orchestrator |
2025-09-29 06:14:29.744741 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-09-29 06:14:29.744747 | orchestrator | Monday 29 September 2025 06:12:20 +0000 (0:00:03.877) 0:03:50.666 ******
2025-09-29 06:14:29.744754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:14:29.744761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:14:29.744774 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.744795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '],
'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.744806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.744812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.744817 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.744823 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.744829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.744852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 
'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.744858 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.744864 | orchestrator | 2025-09-29 06:14:29.744871 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-29 06:14:29.744877 | orchestrator | Monday 29 September 2025 06:12:21 +0000 (0:00:00.866) 0:03:51.532 ****** 2025-09-29 06:14:29.744882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744905 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.744910 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744932 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.744937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-29 06:14:29.744962 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.744968 | orchestrator | 2025-09-29 06:14:29.744973 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-29 06:14:29.744978 | orchestrator | Monday 29 September 2025 06:12:21 +0000 (0:00:00.759) 0:03:52.292 ****** 2025-09-29 06:14:29.744984 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.744989 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.744994 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.745000 | orchestrator | 2025-09-29 06:14:29.745005 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-29 06:14:29.745010 | orchestrator | Monday 29 September 2025 06:12:23 +0000 (0:00:01.273) 0:03:53.566 ****** 2025-09-29 06:14:29.745015 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.745021 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.745026 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.745031 | orchestrator | 2025-09-29 06:14:29.745037 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-29 06:14:29.745042 | orchestrator | Monday 29 September 2025 06:12:25 +0000 (0:00:01.881) 0:03:55.447 ****** 2025-09-29 06:14:29.745047 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.745053 | orchestrator | 2025-09-29 06:14:29.745058 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-29 06:14:29.745076 | orchestrator | Monday 29 September 2025 06:12:26 +0000 (0:00:01.387) 0:03:56.835 ****** 2025-09-29 06:14:29.745082 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-29 
06:14:29.745087 | orchestrator | 2025-09-29 06:14:29.745093 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-29 06:14:29.745100 | orchestrator | Monday 29 September 2025 06:12:27 +0000 (0:00:00.722) 0:03:57.557 ****** 2025-09-29 06:14:29.745106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-29 06:14:29.745112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-29 06:14:29.745118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-29 06:14:29.745124 | orchestrator | 2025-09-29 06:14:29.745129 | orchestrator | TASK 
[haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-29 06:14:29.745135 | orchestrator | Monday 29 September 2025 06:12:31 +0000 (0:00:03.787) 0:04:01.345 ****** 2025-09-29 06:14:29.745140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745149 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745166 | orchestrator | skipping: [testbed-node-2] 
2025-09-29 06:14:29.745171 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745176 | orchestrator | 2025-09-29 06:14:29.745182 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-29 06:14:29.745187 | orchestrator | Monday 29 September 2025 06:12:31 +0000 (0:00:00.866) 0:04:02.211 ****** 2025-09-29 06:14:29.745203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-29 06:14:29.745210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-29 06:14:29.745218 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-29 06:14:29.745230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-29 06:14:29.745235 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-29 06:14:29.745246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 
'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-29 06:14:29.745251 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.745257 | orchestrator | 2025-09-29 06:14:29.745262 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-29 06:14:29.745267 | orchestrator | Monday 29 September 2025 06:12:33 +0000 (0:00:01.311) 0:04:03.522 ****** 2025-09-29 06:14:29.745276 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.745281 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.745286 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.745292 | orchestrator | 2025-09-29 06:14:29.745297 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-29 06:14:29.745302 | orchestrator | Monday 29 September 2025 06:12:35 +0000 (0:00:02.148) 0:04:05.670 ****** 2025-09-29 06:14:29.745308 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.745313 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.745318 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.745324 | orchestrator | 2025-09-29 06:14:29.745329 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-29 06:14:29.745334 | orchestrator | Monday 29 September 2025 06:12:37 +0000 (0:00:02.587) 0:04:08.258 ****** 2025-09-29 06:14:29.745340 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-29 06:14:29.745346 | orchestrator | 2025-09-29 06:14:29.745351 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-29 06:14:29.745356 | orchestrator | Monday 29 September 2025 06:12:39 +0000 
(0:00:01.075) 0:04:09.334 ****** 2025-09-29 06:14:29.745362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745367 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745378 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745402 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.745408 | orchestrator | 2025-09-29 
06:14:29.745413 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-29 06:14:29.745418 | orchestrator | Monday 29 September 2025 06:12:40 +0000 (0:00:01.049) 0:04:10.384 ****** 2025-09-29 06:14:29.745427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745436 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745447 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-29 06:14:29.745458 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.745463 | orchestrator | 2025-09-29 06:14:29.745468 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-29 06:14:29.745474 | orchestrator | Monday 29 September 2025 06:12:41 +0000 (0:00:01.128) 0:04:11.512 ****** 2025-09-29 06:14:29.745479 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745484 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.745489 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745495 | orchestrator | 2025-09-29 06:14:29.745500 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-29 06:14:29.745505 | orchestrator | Monday 29 September 2025 06:12:42 +0000 (0:00:01.530) 0:04:13.043 ****** 2025-09-29 06:14:29.745511 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:14:29.745516 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:14:29.745521 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:14:29.745527 | orchestrator | 2025-09-29 06:14:29.745532 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-29 06:14:29.745537 | orchestrator | Monday 29 September 2025 06:12:44 +0000 (0:00:02.090) 0:04:15.134 ****** 2025-09-29 06:14:29.745543 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:14:29.745548 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:14:29.745553 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:14:29.745558 | orchestrator | 2025-09-29 06:14:29.745564 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-29 06:14:29.745569 | orchestrator | Monday 29 September 2025 06:12:47 +0000 (0:00:02.893) 0:04:18.028 ****** 2025-09-29 06:14:29.745574 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-29 06:14:29.745580 | orchestrator | 2025-09-29 06:14:29.745585 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-29 06:14:29.745590 | orchestrator | Monday 29 September 2025 06:12:48 +0000 (0:00:00.743) 0:04:18.771 ****** 2025-09-29 06:14:29.745596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-29 06:14:29.745601 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-29 06:14:29.745642 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-29 06:14:29.745655 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.745660 | orchestrator | 2025-09-29 06:14:29.745666 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-29 06:14:29.745671 | orchestrator | Monday 29 September 2025 06:12:49 +0000 (0:00:01.105) 0:04:19.876 ****** 2025-09-29 06:14:29.745676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-29 06:14:29.745682 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-29 06:14:29.745693 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745698 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-29 06:14:29.745704 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.745709 | orchestrator | 2025-09-29 06:14:29.745714 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-09-29 06:14:29.745719 | orchestrator | Monday 29 September 2025 06:12:50 +0000 (0:00:01.138) 0:04:21.015 ****** 2025-09-29 06:14:29.745724 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.745730 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.745735 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.745740 | orchestrator | 2025-09-29 06:14:29.745745 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-29 06:14:29.745751 | orchestrator | Monday 29 September 2025 06:12:52 +0000 (0:00:01.337) 0:04:22.353 ****** 2025-09-29 06:14:29.745756 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:14:29.745761 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:14:29.745769 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:14:29.745774 | orchestrator | 2025-09-29 06:14:29.745780 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-29 06:14:29.745785 | orchestrator | Monday 29 September 2025 06:12:54 +0000 (0:00:02.004) 0:04:24.357 ****** 2025-09-29 06:14:29.745790 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:14:29.745795 | orchestrator | ok: [testbed-node-1] 2025-09-29 
06:14:29.745801 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:14:29.745806 | orchestrator | 2025-09-29 06:14:29.745811 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-09-29 06:14:29.745816 | orchestrator | Monday 29 September 2025 06:12:56 +0000 (0:00:02.930) 0:04:27.288 ****** 2025-09-29 06:14:29.745822 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.745827 | orchestrator | 2025-09-29 06:14:29.745832 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-09-29 06:14:29.745837 | orchestrator | Monday 29 September 2025 06:12:58 +0000 (0:00:01.616) 0:04:28.905 ****** 2025-09-29 06:14:29.745858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.745865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.745870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-29 06:14:29.745876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-29 06:14:29.745885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.745891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.745911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.745918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.745923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.745929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.745934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.745945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-29 06:14:29.745963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.745972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.745977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.745983 | orchestrator | 2025-09-29 06:14:29.745988 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-29 06:14:29.745994 | orchestrator | Monday 29 September 2025 06:13:01 +0000 (0:00:03.316) 0:04:32.221 ****** 2025-09-29 06:14:29.745999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.746008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-29 06:14:29.746031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.746053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.746063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.746068 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.746074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.746080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-29 06:14:29.746089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.746095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.746100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.746118 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.746127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.746133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-29 
06:14:29.746138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.746147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-29 06:14:29.746153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:14:29.746158 | orchestrator | skipping: [testbed-node-2] 2025-09-29 
06:14:29.746164 | orchestrator | 2025-09-29 06:14:29.746169 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-29 06:14:29.746174 | orchestrator | Monday 29 September 2025 06:13:02 +0000 (0:00:00.711) 0:04:32.933 ****** 2025-09-29 06:14:29.746180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-29 06:14:29.746185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-29 06:14:29.746190 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.746208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-29 06:14:29.746214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-29 06:14:29.746222 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.746228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-29 06:14:29.746233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-09-29 06:14:29.746239 | orchestrator | skipping: 
[testbed-node-2] 2025-09-29 06:14:29.746244 | orchestrator | 2025-09-29 06:14:29.746249 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-09-29 06:14:29.746254 | orchestrator | Monday 29 September 2025 06:13:03 +0000 (0:00:01.199) 0:04:34.132 ****** 2025-09-29 06:14:29.746260 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.746265 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.746270 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.746275 | orchestrator | 2025-09-29 06:14:29.746280 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-09-29 06:14:29.746286 | orchestrator | Monday 29 September 2025 06:13:05 +0000 (0:00:01.193) 0:04:35.326 ****** 2025-09-29 06:14:29.746294 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.746299 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.746304 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.746310 | orchestrator | 2025-09-29 06:14:29.746315 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-09-29 06:14:29.746320 | orchestrator | Monday 29 September 2025 06:13:06 +0000 (0:00:01.893) 0:04:37.220 ****** 2025-09-29 06:14:29.746326 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.746331 | orchestrator | 2025-09-29 06:14:29.746336 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-09-29 06:14:29.746341 | orchestrator | Monday 29 September 2025 06:13:08 +0000 (0:00:01.252) 0:04:38.472 ****** 2025-09-29 06:14:29.746347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:14:29.746353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:14:29.746371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:14:29.746380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:14:29.746390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:14:29.746396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:14:29.746402 | orchestrator | 2025-09-29 06:14:29.746407 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-29 06:14:29.746413 | orchestrator | Monday 29 September 2025 06:13:13 +0000 (0:00:05.114) 0:04:43.587 ****** 2025-09-29 06:14:29.746430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:14:29.746439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:14:29.746450 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.746456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:14:29.746462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:14:29.746467 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.746485 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:14:29.746494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:14:29.746503 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.746508 | orchestrator | 
2025-09-29 06:14:29.746514 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-09-29 06:14:29.746519 | orchestrator | Monday 29 September 2025 06:13:13 +0000 (0:00:00.617) 0:04:44.205 ****** 2025-09-29 06:14:29.746524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-29 06:14:29.746530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-29 06:14:29.746535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-29 06:14:29.746541 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.746546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-29 06:14:29.746551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-29 06:14:29.746557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-29 06:14:29.746562 | orchestrator | skipping: [testbed-node-1] 2025-09-29 
06:14:29.746568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-29 06:14:29.746573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-29 06:14:29.746578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-29 06:14:29.746584 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.746589 | orchestrator | 2025-09-29 06:14:29.746594 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-29 06:14:29.746600 | orchestrator | Monday 29 September 2025 06:13:14 +0000 (0:00:00.889) 0:04:45.094 ****** 2025-09-29 06:14:29.746692 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.746704 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.746710 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.746715 | orchestrator | 2025-09-29 06:14:29.746721 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-29 06:14:29.746734 | orchestrator | Monday 29 September 2025 06:13:15 +0000 (0:00:00.782) 0:04:45.877 ****** 2025-09-29 06:14:29.746740 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.746745 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.746750 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.746755 | orchestrator | 2025-09-29 06:14:29.746786 | orchestrator | TASK [include_role : prometheus] 
*********************************************** 2025-09-29 06:14:29.746793 | orchestrator | Monday 29 September 2025 06:13:16 +0000 (0:00:01.282) 0:04:47.159 ****** 2025-09-29 06:14:29.746798 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:14:29.746803 | orchestrator | 2025-09-29 06:14:29.746809 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-29 06:14:29.746818 | orchestrator | Monday 29 September 2025 06:13:18 +0000 (0:00:01.435) 0:04:48.595 ****** 2025-09-29 06:14:29.746824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-29 06:14:29.746831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:14:29.746837 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.746854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-29 06:14:29.746879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:14:29.746886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-29 06:14:29.746892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:14:29.746908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.746917 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.746939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-29 06:14:29.746945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-29 06:14:29.746951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.746966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.746979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-29 06:14:29.746985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-29 06:14:29.746991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-29 06:14:29.746997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-29 06:14:29.747008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.747039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.747044 | orchestrator | 2025-09-29 06:14:29.747050 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-29 06:14:29.747058 | orchestrator | Monday 29 September 2025 06:13:22 +0000 (0:00:04.469) 0:04:53.064 ****** 2025-09-29 06:14:29.747064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-29 06:14:29.747069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:14:29.747080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.747098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-29 06:14:29.747107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-29 06:14:29.747112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-09-29 06:14:29.747121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-29 06:14:29.747135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.747141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:14:29.747146 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.747152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.747178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-29 06:14:29.747184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2025-09-29 06:14:29.747189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.747206 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.747211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-29 06:14:29.747219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:14:29.747224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:14:29.747252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-29 06:14:29.747260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 
'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-29 06:14:29.747265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:14:29.747280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-29 06:14:29.747285 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747290 | orchestrator |
2025-09-29 06:14:29.747294 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-29 06:14:29.747299 | orchestrator | Monday 29 September 2025 06:13:23 +0000 (0:00:01.232) 0:04:54.297 ******
2025-09-29 06:14:29.747304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-29 06:14:29.747309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-29 06:14:29.747315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-29 06:14:29.747323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-29 06:14:29.747328 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.747333 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-29 06:14:29.747338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-29 06:14:29.747343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-29 06:14:29.747348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-29 06:14:29.747353 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.747358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-29 06:14:29.747363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-29 06:14:29.747368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  
2025-09-29 06:14:29.747375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-29 06:14:29.747380 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747385 | orchestrator |
2025-09-29 06:14:29.747390 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-29 06:14:29.747394 | orchestrator | Monday 29 September 2025 06:13:25 +0000 (0:00:01.064) 0:04:55.361 ******
2025-09-29 06:14:29.747401 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.747406 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.747423 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747429 | orchestrator |
2025-09-29 06:14:29.747433 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-29 06:14:29.747438 | orchestrator | Monday 29 September 2025 06:13:25 +0000 (0:00:00.453) 0:04:55.815 ******
2025-09-29 06:14:29.747443 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.747447 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.747452 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747457 | orchestrator |
2025-09-29 06:14:29.747467 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-29 06:14:29.747472 | orchestrator | Monday 29 September 2025 06:13:27 +0000 (0:00:01.534) 0:04:57.350 ******
2025-09-29 06:14:29.747476 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.747481 | orchestrator |
2025-09-29 06:14:29.747488 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config]
******************* 2025-09-29 06:14:29.747493 | orchestrator | Monday 29 September 2025 06:13:28 +0000 (0:00:01.924) 0:04:59.275 ****** 2025-09-29 06:14:29.747498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-29 06:14:29.747503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-29 06:14:29.747509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-29 06:14:29.747514 | orchestrator |
2025-09-29 06:14:29.747521 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-29 06:14:29.747526 | orchestrator | Monday 29 September 2025 06:13:31 +0000 (0:00:02.519) 0:05:01.795 ******
2025-09-29 06:14:29.747533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'},
'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-29 06:14:29.747543 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.747548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-29 06:14:29.747553 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.747558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-29 06:14:29.747563 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747568 | orchestrator |
2025-09-29 06:14:29.747572 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-29 06:14:29.747577 | orchestrator | Monday 29 September 2025 06:13:31 +0000 (0:00:00.406) 0:05:02.201 ******
2025-09-29 06:14:29.747582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-29 06:14:29.747587 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.747592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-29 06:14:29.747597 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.747602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-29 06:14:29.747619 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747624 | orchestrator |
2025-09-29 06:14:29.747629 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-29 06:14:29.747672 | orchestrator | Monday 29 September 2025 06:13:32 +0000 (0:00:01.016) 0:05:03.217 ******
2025-09-29 06:14:29.747682 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.747687 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.747692 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747696 | orchestrator |
2025-09-29 06:14:29.747701 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-29 06:14:29.747706 | orchestrator | Monday 29 September 2025 06:13:33 +0000 (0:00:00.418) 0:05:03.636 ******
2025-09-29 06:14:29.747711 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.747718 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.747723 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.747728 | orchestrator |
2025-09-29 06:14:29.747732 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-29 06:14:29.747737 | orchestrator | Monday 29 September 2025 06:13:34 +0000 (0:00:01.349) 0:05:04.986 ******
2025-09-29 06:14:29.747742 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:14:29.747747 | orchestrator |
2025-09-29 06:14:29.747752 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-29 06:14:29.747756 | orchestrator | Monday 29 September 2025 06:13:36 +0000 (0:00:01.823) 0:05:06.809 ******
2025-09-29 06:14:29.747761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.747767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.747772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.747787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.747793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.747798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-29 06:14:29.747803 | orchestrator | 2025-09-29 06:14:29.747808 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-29 06:14:29.747813 | orchestrator | Monday 29 September 2025 06:13:42 +0000 (0:00:06.323) 0:05:13.132 ****** 2025-09-29 06:14:29.747818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.747828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.747833 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.747842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.747847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.747852 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.747857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.747866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-29 06:14:29.747871 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.747875 | orchestrator | 2025-09-29 06:14:29.747880 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-29 06:14:29.747887 | orchestrator | Monday 29 September 2025 06:13:43 +0000 (0:00:00.669) 0:05:13.802 ****** 2025-09-29 06:14:29.747892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747914 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:14:29.747919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 
'no'}})  2025-09-29 06:14:29.747939 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:14:29.747944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-29 06:14:29.747966 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:14:29.747971 | orchestrator | 2025-09-29 06:14:29.747976 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-29 06:14:29.747980 | orchestrator | Monday 29 September 2025 06:13:45 +0000 (0:00:01.632) 0:05:15.435 ****** 2025-09-29 06:14:29.747985 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:14:29.747990 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:14:29.747995 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:14:29.747999 | orchestrator | 2025-09-29 06:14:29.748004 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-29 06:14:29.748009 | orchestrator | Monday 29 September 2025 06:13:46 +0000 (0:00:01.328) 0:05:16.764 ****** 2025-09-29 06:14:29.748014 | 
orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.748018 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.748023 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.748028 | orchestrator |
2025-09-29 06:14:29.748033 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-29 06:14:29.748038 | orchestrator | Monday 29 September 2025 06:13:48 +0000 (0:00:02.168) 0:05:18.932 ******
2025-09-29 06:14:29.748042 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748047 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748052 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748056 | orchestrator |
2025-09-29 06:14:29.748061 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-29 06:14:29.748066 | orchestrator | Monday 29 September 2025 06:13:48 +0000 (0:00:00.269) 0:05:19.202 ******
2025-09-29 06:14:29.748071 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748076 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748080 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748085 | orchestrator |
2025-09-29 06:14:29.748090 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-29 06:14:29.748096 | orchestrator | Monday 29 September 2025 06:13:49 +0000 (0:00:00.270) 0:05:19.472 ******
2025-09-29 06:14:29.748101 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748106 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748111 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748116 | orchestrator |
2025-09-29 06:14:29.748120 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-29 06:14:29.748125 | orchestrator | Monday 29 September 2025 06:13:49 +0000 (0:00:00.271) 0:05:19.943 ******
2025-09-29 06:14:29.748132 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748137 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748141 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748146 | orchestrator |
2025-09-29 06:14:29.748151 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-29 06:14:29.748156 | orchestrator | Monday 29 September 2025 06:13:49 +0000 (0:00:00.271) 0:05:20.214 ******
2025-09-29 06:14:29.748161 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748165 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748170 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748175 | orchestrator |
2025-09-29 06:14:29.748180 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-29 06:14:29.748184 | orchestrator | Monday 29 September 2025 06:13:50 +0000 (0:00:00.272) 0:05:20.486 ******
2025-09-29 06:14:29.748189 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748194 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748198 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748203 | orchestrator |
2025-09-29 06:14:29.748208 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-29 06:14:29.748216 | orchestrator | Monday 29 September 2025 06:13:50 +0000 (0:00:00.637) 0:05:21.124 ******
2025-09-29 06:14:29.748221 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748226 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748231 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748235 | orchestrator |
2025-09-29 06:14:29.748240 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-29 06:14:29.748245 | orchestrator | Monday 29 September 2025 06:13:51 +0000 (0:00:00.285) 0:05:21.827 ******
2025-09-29 06:14:29.748249 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748254 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748259 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748264 | orchestrator |
2025-09-29 06:14:29.748268 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-29 06:14:29.748273 | orchestrator | Monday 29 September 2025 06:13:51 +0000 (0:00:00.285) 0:05:22.113 ******
2025-09-29 06:14:29.748278 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748283 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748287 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748292 | orchestrator |
2025-09-29 06:14:29.748297 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-29 06:14:29.748302 | orchestrator | Monday 29 September 2025 06:13:52 +0000 (0:00:00.780) 0:05:22.893 ******
2025-09-29 06:14:29.748306 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748311 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748316 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748320 | orchestrator |
2025-09-29 06:14:29.748325 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-29 06:14:29.748330 | orchestrator | Monday 29 September 2025 06:13:53 +0000 (0:00:01.059) 0:05:23.952 ******
2025-09-29 06:14:29.748335 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748339 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748344 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748349 | orchestrator |
2025-09-29 06:14:29.748354 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-29 06:14:29.748358 | orchestrator | Monday 29 September 2025 06:13:54 +0000 (0:00:00.818) 0:05:24.771 ******
2025-09-29 06:14:29.748363 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.748368 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.748373 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.748377 | orchestrator |
2025-09-29 06:14:29.748382 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-29 06:14:29.748387 | orchestrator | Monday 29 September 2025 06:13:59 +0000 (0:00:04.555) 0:05:29.326 ******
2025-09-29 06:14:29.748392 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748396 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748401 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748406 | orchestrator |
2025-09-29 06:14:29.748411 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-29 06:14:29.748415 | orchestrator | Monday 29 September 2025 06:14:02 +0000 (0:00:03.689) 0:05:33.016 ******
2025-09-29 06:14:29.748420 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.748425 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.748430 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.748434 | orchestrator |
2025-09-29 06:14:29.748439 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-29 06:14:29.748444 | orchestrator | Monday 29 September 2025 06:14:11 +0000 (0:00:08.399) 0:05:41.416 ******
2025-09-29 06:14:29.748449 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748453 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748458 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748463 | orchestrator |
2025-09-29 06:14:29.748468 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-29 06:14:29.748472 | orchestrator | Monday 29 September 2025 06:14:15 +0000 (0:00:04.187) 0:05:45.604 ******
2025-09-29 06:14:29.748480 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:14:29.748485 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:14:29.748490 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:14:29.748494 | orchestrator |
2025-09-29 06:14:29.748499 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-29 06:14:29.748504 | orchestrator | Monday 29 September 2025 06:14:19 +0000 (0:00:04.051) 0:05:49.656 ******
2025-09-29 06:14:29.748509 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748513 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748518 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748523 | orchestrator |
2025-09-29 06:14:29.748528 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-29 06:14:29.748532 | orchestrator | Monday 29 September 2025 06:14:19 +0000 (0:00:00.279) 0:05:49.935 ******
2025-09-29 06:14:29.748537 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748544 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748549 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748553 | orchestrator |
2025-09-29 06:14:29.748558 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-29 06:14:29.748563 | orchestrator | Monday 29 September 2025 06:14:19 +0000 (0:00:00.276) 0:05:50.211 ******
2025-09-29 06:14:29.748568 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748572 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748579 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748584 | orchestrator |
2025-09-29 06:14:29.748589 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-29 06:14:29.748594 | orchestrator | Monday 29 September 2025 06:14:20 +0000 (0:00:00.496) 0:05:50.708 ******
2025-09-29 06:14:29.748599 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748613 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748618 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748623 | orchestrator |
2025-09-29 06:14:29.748628 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-29 06:14:29.748633 | orchestrator | Monday 29 September 2025 06:14:20 +0000 (0:00:00.289) 0:05:50.998 ******
2025-09-29 06:14:29.748637 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748642 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748647 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748651 | orchestrator |
2025-09-29 06:14:29.748656 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-29 06:14:29.748661 | orchestrator | Monday 29 September 2025 06:14:20 +0000 (0:00:00.280) 0:05:51.278 ******
2025-09-29 06:14:29.748666 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:14:29.748670 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:14:29.748675 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:14:29.748680 | orchestrator |
2025-09-29 06:14:29.748685 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-09-29 06:14:29.748690 | orchestrator | Monday 29 September 2025 06:14:21 +0000 (0:00:00.291) 0:05:51.570 ******
2025-09-29 06:14:29.748694 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748699 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:14:29.748704 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748709 | orchestrator |
2025-09-29 06:14:29.748713 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-29 06:14:29.748718 | orchestrator | Monday 29 September 2025 06:14:26 +0000 (0:00:04.959) 0:05:56.529 ******
2025-09-29 06:14:29.748723 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:14:29.748728 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:14:29.748732 | orchestrator | ok:
[testbed-node-2]
2025-09-29 06:14:29.748737 | orchestrator |
2025-09-29 06:14:29.748742 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:14:29.748747 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-29 06:14:29.748755 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-29 06:14:29.748760 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-29 06:14:29.748765 | orchestrator |
2025-09-29 06:14:29.748770 | orchestrator |
2025-09-29 06:14:29.748774 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:14:29.748779 | orchestrator | Monday 29 September 2025 06:14:26 +0000 (0:00:00.710) 0:05:57.240 ******
2025-09-29 06:14:29.748784 | orchestrator | ===============================================================================
2025-09-29 06:14:29.748789 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.40s
2025-09-29 06:14:29.748793 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.32s
2025-09-29 06:14:29.748798 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.06s
2025-09-29 06:14:29.748803 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.18s
2025-09-29 06:14:29.748808 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.14s
2025-09-29 06:14:29.748813 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.11s
2025-09-29 06:14:29.748817 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.96s
2025-09-29 06:14:29.748822 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.56s
2025-09-29 06:14:29.748827 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.47s
2025-09-29 06:14:29.748832 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.19s
2025-09-29 06:14:29.748836 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.16s
2025-09-29 06:14:29.748841 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.11s
2025-09-29 06:14:29.748846 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.09s
2025-09-29 06:14:29.748851 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.05s
2025-09-29 06:14:29.748855 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.88s
2025-09-29 06:14:29.748860 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.79s
2025-09-29 06:14:29.748865 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.79s
2025-09-29 06:14:29.748870 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.76s
2025-09-29 06:14:29.748874 | orchestrator | loadbalancer : Wait for backup haproxy to start ------------------------- 3.69s
2025-09-29 06:14:29.748879 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.60s
2025-09-29 06:14:32.772775 | orchestrator | 2025-09-29 06:14:32 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:32.776754 | orchestrator | 2025-09-29 06:14:32 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:32.780500 | orchestrator | 2025-09-29 06:14:32 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:32.780532 | orchestrator | 2025-09-29
06:14:32 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:35.809967 | orchestrator | 2025-09-29 06:14:35 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:35.811010 | orchestrator | 2025-09-29 06:14:35 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:35.813575 | orchestrator | 2025-09-29 06:14:35 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:35.813637 | orchestrator | 2025-09-29 06:14:35 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:38.850993 | orchestrator | 2025-09-29 06:14:38 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:38.851102 | orchestrator | 2025-09-29 06:14:38 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:38.852064 | orchestrator | 2025-09-29 06:14:38 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:38.852093 | orchestrator | 2025-09-29 06:14:38 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:41.886540 | orchestrator | 2025-09-29 06:14:41 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:41.887565 | orchestrator | 2025-09-29 06:14:41 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:41.888520 | orchestrator | 2025-09-29 06:14:41 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:41.888545 | orchestrator | 2025-09-29 06:14:41 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:44.915880 | orchestrator | 2025-09-29 06:14:44 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:44.916094 | orchestrator | 2025-09-29 06:14:44 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:44.916932 | orchestrator | 2025-09-29 06:14:44 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:44.917102 | orchestrator | 2025-09-29 06:14:44 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:47.946213 | orchestrator | 2025-09-29 06:14:47 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:47.947485 | orchestrator | 2025-09-29 06:14:47 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:47.948203 | orchestrator | 2025-09-29 06:14:47 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:47.948222 | orchestrator | 2025-09-29 06:14:47 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:50.992899 | orchestrator | 2025-09-29 06:14:50 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:50.994796 | orchestrator | 2025-09-29 06:14:50 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:50.996928 | orchestrator | 2025-09-29 06:14:50 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:50.996972 | orchestrator | 2025-09-29 06:14:50 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:54.070128 | orchestrator | 2025-09-29 06:14:54 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:54.071853 | orchestrator | 2025-09-29 06:14:54 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:54.072778 | orchestrator | 2025-09-29 06:14:54 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:54.072845 | orchestrator | 2025-09-29 06:14:54 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:14:57.115749 | orchestrator | 2025-09-29 06:14:57 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:14:57.117082 | orchestrator | 2025-09-29 06:14:57 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:14:57.118456 | orchestrator | 2025-09-29 06:14:57 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:14:57.118658 | orchestrator | 2025-09-29 06:14:57 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:00.178906 | orchestrator | 2025-09-29 06:15:00 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:00.184881 | orchestrator | 2025-09-29 06:15:00 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:00.186794 | orchestrator | 2025-09-29 06:15:00 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:00.187256 | orchestrator | 2025-09-29 06:15:00 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:03.245906 | orchestrator | 2025-09-29 06:15:03 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:03.249167 | orchestrator | 2025-09-29 06:15:03 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:03.250961 | orchestrator | 2025-09-29 06:15:03 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:03.251271 | orchestrator | 2025-09-29 06:15:03 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:06.283493 | orchestrator | 2025-09-29 06:15:06 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:06.285428 | orchestrator | 2025-09-29 06:15:06 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:06.285967 | orchestrator | 2025-09-29 06:15:06 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:06.286084 | orchestrator | 2025-09-29 06:15:06 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:09.332742 | orchestrator | 2025-09-29 06:15:09 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:09.333993 | orchestrator | 2025-09-29 06:15:09 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:09.335222 | orchestrator | 2025-09-29 06:15:09 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:09.335489 | orchestrator | 2025-09-29 06:15:09 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:12.371165 | orchestrator | 2025-09-29 06:15:12 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:12.372173 | orchestrator | 2025-09-29 06:15:12 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:12.373139 | orchestrator | 2025-09-29 06:15:12 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:12.373171 | orchestrator | 2025-09-29 06:15:12 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:15.419828 | orchestrator | 2025-09-29 06:15:15 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:15.422393 | orchestrator | 2025-09-29 06:15:15 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:15.423868 | orchestrator | 2025-09-29 06:15:15 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:15.423952 | orchestrator | 2025-09-29 06:15:15 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:18.459928 | orchestrator | 2025-09-29 06:15:18 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:18.460082 | orchestrator | 2025-09-29 06:15:18 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:18.461713 | orchestrator | 2025-09-29 06:15:18 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:18.461814 | orchestrator | 2025-09-29 06:15:18 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:21.492221 | orchestrator | 2025-09-29 06:15:21 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:21.493175 | orchestrator | 2025-09-29 06:15:21 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:21.494967 | orchestrator | 2025-09-29 06:15:21 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:21.495072 | orchestrator | 2025-09-29 06:15:21 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:24.541089 | orchestrator | 2025-09-29 06:15:24 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:24.542317 | orchestrator | 2025-09-29 06:15:24 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:24.543709 | orchestrator | 2025-09-29 06:15:24 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:24.543760 | orchestrator | 2025-09-29 06:15:24 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:27.583489 | orchestrator | 2025-09-29 06:15:27 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:27.585401 | orchestrator | 2025-09-29 06:15:27 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:27.587266 | orchestrator | 2025-09-29 06:15:27 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:27.587292 | orchestrator | 2025-09-29 06:15:27 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:15:30.629934 | orchestrator | 2025-09-29 06:15:30 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:15:30.632069 | orchestrator | 2025-09-29 06:15:30 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:15:30.633469 | orchestrator | 2025-09-29 06:15:30 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED
2025-09-29 06:15:30.633749 | orchestrator | 2025-09-29 06:15:30 | INFO  | Wait 1 second(s) until the next
check 2025-09-29 06:15:33.673091 | orchestrator | 2025-09-29 06:15:33 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:33.675399 | orchestrator | 2025-09-29 06:15:33 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:33.676313 | orchestrator | 2025-09-29 06:15:33 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:33.676350 | orchestrator | 2025-09-29 06:15:33 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:36.726257 | orchestrator | 2025-09-29 06:15:36 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:36.726714 | orchestrator | 2025-09-29 06:15:36 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:36.728323 | orchestrator | 2025-09-29 06:15:36 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:36.728465 | orchestrator | 2025-09-29 06:15:36 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:39.777870 | orchestrator | 2025-09-29 06:15:39 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:39.779320 | orchestrator | 2025-09-29 06:15:39 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:39.782191 | orchestrator | 2025-09-29 06:15:39 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:39.782255 | orchestrator | 2025-09-29 06:15:39 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:42.818522 | orchestrator | 2025-09-29 06:15:42 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:42.819615 | orchestrator | 2025-09-29 06:15:42 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:42.820941 | orchestrator | 2025-09-29 06:15:42 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 
06:15:42.821154 | orchestrator | 2025-09-29 06:15:42 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:45.868630 | orchestrator | 2025-09-29 06:15:45 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:45.870952 | orchestrator | 2025-09-29 06:15:45 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:45.876077 | orchestrator | 2025-09-29 06:15:45 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:45.876122 | orchestrator | 2025-09-29 06:15:45 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:48.921171 | orchestrator | 2025-09-29 06:15:48 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:48.922492 | orchestrator | 2025-09-29 06:15:48 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:48.924284 | orchestrator | 2025-09-29 06:15:48 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:48.924319 | orchestrator | 2025-09-29 06:15:48 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:51.973520 | orchestrator | 2025-09-29 06:15:51 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:51.974704 | orchestrator | 2025-09-29 06:15:51 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:51.976322 | orchestrator | 2025-09-29 06:15:51 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:51.976355 | orchestrator | 2025-09-29 06:15:51 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:55.020757 | orchestrator | 2025-09-29 06:15:55 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:55.022308 | orchestrator | 2025-09-29 06:15:55 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:55.026132 | orchestrator | 2025-09-29 06:15:55 | 
INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:55.026300 | orchestrator | 2025-09-29 06:15:55 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:15:58.064288 | orchestrator | 2025-09-29 06:15:58 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:15:58.065105 | orchestrator | 2025-09-29 06:15:58 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:15:58.065840 | orchestrator | 2025-09-29 06:15:58 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:15:58.065869 | orchestrator | 2025-09-29 06:15:58 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:01.108211 | orchestrator | 2025-09-29 06:16:01 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:01.109177 | orchestrator | 2025-09-29 06:16:01 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:01.110863 | orchestrator | 2025-09-29 06:16:01 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:01.110889 | orchestrator | 2025-09-29 06:16:01 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:04.144314 | orchestrator | 2025-09-29 06:16:04 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:04.144438 | orchestrator | 2025-09-29 06:16:04 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:04.145611 | orchestrator | 2025-09-29 06:16:04 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:04.145646 | orchestrator | 2025-09-29 06:16:04 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:07.193136 | orchestrator | 2025-09-29 06:16:07 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:07.194598 | orchestrator | 2025-09-29 06:16:07 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in 
state STARTED 2025-09-29 06:16:07.196881 | orchestrator | 2025-09-29 06:16:07 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:07.196958 | orchestrator | 2025-09-29 06:16:07 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:10.243567 | orchestrator | 2025-09-29 06:16:10 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:10.244672 | orchestrator | 2025-09-29 06:16:10 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:10.246766 | orchestrator | 2025-09-29 06:16:10 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:10.246821 | orchestrator | 2025-09-29 06:16:10 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:13.297362 | orchestrator | 2025-09-29 06:16:13 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:13.299996 | orchestrator | 2025-09-29 06:16:13 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:13.301973 | orchestrator | 2025-09-29 06:16:13 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:13.302087 | orchestrator | 2025-09-29 06:16:13 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:16.346232 | orchestrator | 2025-09-29 06:16:16 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:16.347744 | orchestrator | 2025-09-29 06:16:16 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:16.352350 | orchestrator | 2025-09-29 06:16:16 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:16.352417 | orchestrator | 2025-09-29 06:16:16 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:19.405970 | orchestrator | 2025-09-29 06:16:19 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:19.407068 | orchestrator 
| 2025-09-29 06:16:19 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:19.408935 | orchestrator | 2025-09-29 06:16:19 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:19.408968 | orchestrator | 2025-09-29 06:16:19 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:22.451121 | orchestrator | 2025-09-29 06:16:22 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:22.452221 | orchestrator | 2025-09-29 06:16:22 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:22.453921 | orchestrator | 2025-09-29 06:16:22 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:22.453962 | orchestrator | 2025-09-29 06:16:22 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:25.509104 | orchestrator | 2025-09-29 06:16:25 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:25.511465 | orchestrator | 2025-09-29 06:16:25 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:25.513210 | orchestrator | 2025-09-29 06:16:25 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:25.513269 | orchestrator | 2025-09-29 06:16:25 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:28.566437 | orchestrator | 2025-09-29 06:16:28 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:28.569076 | orchestrator | 2025-09-29 06:16:28 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:28.569180 | orchestrator | 2025-09-29 06:16:28 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:28.569206 | orchestrator | 2025-09-29 06:16:28 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:31.621868 | orchestrator | 2025-09-29 06:16:31 | INFO  | Task 
61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:31.624459 | orchestrator | 2025-09-29 06:16:31 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:31.626327 | orchestrator | 2025-09-29 06:16:31 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:31.626554 | orchestrator | 2025-09-29 06:16:31 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:34.674803 | orchestrator | 2025-09-29 06:16:34 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:34.676345 | orchestrator | 2025-09-29 06:16:34 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:34.678315 | orchestrator | 2025-09-29 06:16:34 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:34.678393 | orchestrator | 2025-09-29 06:16:34 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:37.722589 | orchestrator | 2025-09-29 06:16:37 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:37.723570 | orchestrator | 2025-09-29 06:16:37 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:37.725364 | orchestrator | 2025-09-29 06:16:37 | INFO  | Task 174635ef-6660-4ad3-8978-e7338445f93f is in state STARTED 2025-09-29 06:16:37.725431 | orchestrator | 2025-09-29 06:16:37 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:16:40.781781 | orchestrator | 2025-09-29 06:16:40 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:16:40.782113 | orchestrator | 2025-09-29 06:16:40 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:16:40.783839 | orchestrator | 2025-09-29 06:16:40 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:16:40.789123 | orchestrator | 2025-09-29 06:16:40 | INFO  | Task 
174635ef-6660-4ad3-8978-e7338445f93f is in state SUCCESS 2025-09-29 06:16:40.791272 | orchestrator | 2025-09-29 06:16:40.791320 | orchestrator | 2025-09-29 06:16:40.791333 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-09-29 06:16:40.791345 | orchestrator | 2025-09-29 06:16:40.791356 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-29 06:16:40.791368 | orchestrator | Monday 29 September 2025 06:06:09 +0000 (0:00:00.914) 0:00:00.914 ****** 2025-09-29 06:16:40.791381 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.791393 | orchestrator | 2025-09-29 06:16:40.791406 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-29 06:16:40.791417 | orchestrator | Monday 29 September 2025 06:06:10 +0000 (0:00:01.379) 0:00:02.294 ****** 2025-09-29 06:16:40.791428 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.791440 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.791450 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.791500 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.791542 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.791554 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.791564 | orchestrator | 2025-09-29 06:16:40.791575 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-29 06:16:40.791586 | orchestrator | Monday 29 September 2025 06:06:12 +0000 (0:00:01.573) 0:00:03.867 ****** 2025-09-29 06:16:40.791596 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.791607 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.791617 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.791672 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.791701 
| orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.791730 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.791746 | orchestrator | 2025-09-29 06:16:40.791771 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-09-29 06:16:40.791782 | orchestrator | Monday 29 September 2025 06:06:13 +0000 (0:00:00.904) 0:00:04.772 ****** 2025-09-29 06:16:40.791799 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.791810 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.791820 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.791831 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.791841 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.791852 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.791870 | orchestrator | 2025-09-29 06:16:40.791884 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-29 06:16:40.791896 | orchestrator | Monday 29 September 2025 06:06:14 +0000 (0:00:00.795) 0:00:05.567 ****** 2025-09-29 06:16:40.791908 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.791921 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.791933 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.791952 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.791964 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.791976 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.791988 | orchestrator | 2025-09-29 06:16:40.792000 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-29 06:16:40.792020 | orchestrator | Monday 29 September 2025 06:06:14 +0000 (0:00:00.664) 0:00:06.232 ****** 2025-09-29 06:16:40.792038 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.792051 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.792064 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.792074 | orchestrator | ok: 
[testbed-node-3] 2025-09-29 06:16:40.792085 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.792095 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.792105 | orchestrator | 2025-09-29 06:16:40.792116 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-29 06:16:40.792135 | orchestrator | Monday 29 September 2025 06:06:15 +0000 (0:00:00.472) 0:00:06.704 ****** 2025-09-29 06:16:40.792146 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.792164 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.792181 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.792198 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.792225 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.792244 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.792263 | orchestrator | 2025-09-29 06:16:40.792281 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-29 06:16:40.792306 | orchestrator | Monday 29 September 2025 06:06:16 +0000 (0:00:00.869) 0:00:07.574 ****** 2025-09-29 06:16:40.792322 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.792341 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.792358 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.792391 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.792410 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.792444 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.792498 | orchestrator | 2025-09-29 06:16:40.792517 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-29 06:16:40.792534 | orchestrator | Monday 29 September 2025 06:06:16 +0000 (0:00:00.782) 0:00:08.356 ****** 2025-09-29 06:16:40.792585 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.792612 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.792630 | orchestrator | 
ok: [testbed-node-2] 2025-09-29 06:16:40.792648 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.792666 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.792684 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.792724 | orchestrator | 2025-09-29 06:16:40.792741 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-29 06:16:40.792758 | orchestrator | Monday 29 September 2025 06:06:17 +0000 (0:00:00.804) 0:00:09.161 ****** 2025-09-29 06:16:40.792790 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-29 06:16:40.792818 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-29 06:16:40.792845 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-29 06:16:40.792863 | orchestrator | 2025-09-29 06:16:40.792881 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-29 06:16:40.792899 | orchestrator | Monday 29 September 2025 06:06:18 +0000 (0:00:00.780) 0:00:09.941 ****** 2025-09-29 06:16:40.792917 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.792940 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.792958 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.792976 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.792993 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.793020 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.793037 | orchestrator | 2025-09-29 06:16:40.793074 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-29 06:16:40.793091 | orchestrator | Monday 29 September 2025 06:06:20 +0000 (0:00:01.733) 0:00:11.674 ****** 2025-09-29 06:16:40.793109 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-29 06:16:40.793127 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2025-09-29 06:16:40.793145 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-29 06:16:40.793163 | orchestrator | 2025-09-29 06:16:40.793181 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-29 06:16:40.793199 | orchestrator | Monday 29 September 2025 06:06:23 +0000 (0:00:03.048) 0:00:14.723 ****** 2025-09-29 06:16:40.793217 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-29 06:16:40.793235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-29 06:16:40.793253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-29 06:16:40.793271 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.793288 | orchestrator | 2025-09-29 06:16:40.793306 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-29 06:16:40.793323 | orchestrator | Monday 29 September 2025 06:06:23 +0000 (0:00:00.609) 0:00:15.333 ****** 2025-09-29 06:16:40.793344 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793376 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793396 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793413 | orchestrator | skipping: [testbed-node-0] 2025-09-29 
06:16:40.793430 | orchestrator | 2025-09-29 06:16:40.793447 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-29 06:16:40.793491 | orchestrator | Monday 29 September 2025 06:06:24 +0000 (0:00:00.849) 0:00:16.183 ****** 2025-09-29 06:16:40.793527 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793568 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793586 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.793604 | orchestrator | 2025-09-29 06:16:40.793620 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-29 06:16:40.793644 | orchestrator | Monday 29 September 2025 06:06:24 +0000 (0:00:00.148) 0:00:16.331 ****** 
2025-09-29 06:16:40.793665 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-29 06:06:20.995394', 'end': '2025-09-29 06:06:21.252724', 'delta': '0:00:00.257330', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-29 06:06:21.962712', 'end': '2025-09-29 06:06:22.215768', 'delta': '0:00:00.253056', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.793733 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-29 06:06:22.865010', 'end': '2025-09-29 06:06:23.164537', 'delta': '0:00:00.299527', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-29 06:16:40.793751 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.793779 | orchestrator |
2025-09-29 06:16:40.793797 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-29 06:16:40.793832 | orchestrator | Monday 29 September 2025 06:06:25 +0000 (0:00:00.577) 0:00:16.909 ******
2025-09-29 06:16:40.793850 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.793868 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.793886 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.793916 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.793935 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.793952 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.793979 | orchestrator |
2025-09-29 06:16:40.793990 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-29 06:16:40.794001 | orchestrator | Monday 29 September 2025 06:06:27 +0000 (0:00:02.209) 0:00:19.118 ******
2025-09-29 06:16:40.794012 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.794067 | orchestrator |
2025-09-29 06:16:40.794080 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-29 06:16:40.794098 | orchestrator | Monday 29 September 2025 06:06:28 +0000 (0:00:01.032) 0:00:20.151 ******
2025-09-29 06:16:40.794115 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794133 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.794151 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.794169 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.794187 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.794204 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.794222 | orchestrator |
2025-09-29 06:16:40.794241 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-29 06:16:40.794259 | orchestrator | Monday 29 September 2025 06:06:29 +0000 (0:00:01.622) 0:00:21.225 ******
2025-09-29 06:16:40.794277 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794296 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.794313 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.794346 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.794363 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.794382 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.794401 | orchestrator |
2025-09-29 06:16:40.794418 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-29 06:16:40.794435 | orchestrator | Monday 29 September 2025 06:06:31 +0000 (0:00:01.622) 0:00:22.847 ******
2025-09-29 06:16:40.794455 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794531 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.794551 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.794569 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.794586 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.794604 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.794621 | orchestrator |
2025-09-29 06:16:40.794638 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-29 06:16:40.794649 | orchestrator | Monday 29 September 2025 06:06:32 +0000 (0:00:00.874) 0:00:23.721 ******
2025-09-29 06:16:40.794660 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794670 | orchestrator |
2025-09-29 06:16:40.794680 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-29 06:16:40.794691 | orchestrator | Monday 29 September 2025 06:06:32 +0000 (0:00:00.124) 0:00:23.846 ******
2025-09-29 06:16:40.794702 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794712 | orchestrator |
2025-09-29 06:16:40.794723 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-29 06:16:40.794733 | orchestrator | Monday 29 September 2025 06:06:32 +0000 (0:00:00.194) 0:00:24.040 ******
2025-09-29 06:16:40.794744 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794754 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.794765 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.794775 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.794786 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.794808 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.794818 | orchestrator |
2025-09-29 06:16:40.794829 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-29 06:16:40.794849 | orchestrator | Monday 29 September 2025 06:06:33 +0000 (0:00:00.717) 0:00:24.757 ******
2025-09-29 06:16:40.794860 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794871 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.794881 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.794892 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.794902 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.794913 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.794923 | orchestrator |
2025-09-29 06:16:40.794933 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-29 06:16:40.794942 | orchestrator | Monday 29 September 2025 06:06:34 +0000 (0:00:01.079) 0:00:25.837 ******
2025-09-29 06:16:40.794952 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.794961 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.794970 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.794980 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.794989 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.794998 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.795007 | orchestrator |
2025-09-29 06:16:40.795017 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-29 06:16:40.795026 | orchestrator | Monday 29 September 2025 06:06:35 +0000 (0:00:01.027) 0:00:26.864 ******
2025-09-29 06:16:40.795035 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.795045 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.795054 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.795063 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.795073 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.795082 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.795091 | orchestrator |
2025-09-29 06:16:40.795101 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-29 06:16:40.795117 | orchestrator | Monday 29 September 2025 06:06:36 +0000 (0:00:00.897) 0:00:27.761 ******
2025-09-29 06:16:40.795127 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.795136 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.795145 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.795154 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.795163 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.795173 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.795182 | orchestrator |
2025-09-29 06:16:40.795191 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-29 06:16:40.795201 | orchestrator | Monday 29 September 2025 06:06:37 +0000 (0:00:00.851) 0:00:28.613 ******
2025-09-29 06:16:40.795210 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.795219 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.795228 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.795238 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.795247 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.795256 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.795266 | orchestrator |
2025-09-29 06:16:40.795275 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-29 06:16:40.795285 | orchestrator | Monday 29 September 2025 06:06:38 +0000 (0:00:01.101) 0:00:29.714 ******
2025-09-29 06:16:40.795294 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.795303 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.795313 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.795322 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.795331 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.795341 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.795350 | orchestrator |
2025-09-29 06:16:40.795359 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-29 06:16:40.795375 | orchestrator | Monday 29 September 2025 06:06:38 +0000 (0:00:00.632) 0:00:30.347 ******
2025-09-29 06:16:40.795386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0',
'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.795579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.795591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96', 'scsi-SQEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part1', 'scsi-SQEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part14', 'scsi-SQEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part15', 'scsi-SQEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part16', 'scsi-SQEMU_QEMU_HARDDISK_5dd06d1d-6cac-4d6a-b9ef-711b6dd82f96-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.795720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.795783 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.795795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795834 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.795868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a', 'scsi-SQEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part1', 'scsi-SQEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part14', 'scsi-SQEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part14'], 'labels': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part15', 'scsi-SQEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part16', 'scsi-SQEMU_QEMU_HARDDISK_dcb9eb34-30b6-467b-99a0-70fbe86f795a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.795940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.795950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--da34c784--00a3--5dad--8c50--6eedba006e78-osd--block--da34c784--00a3--5dad--8c50--6eedba006e78', 'dm-uuid-LVM-d8NZKwy7ftTse94wxkQnua72TKxupiytuYe05Wity3i14Qhl4VROCqD6knnOpqAB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b44ac90--f026--5081--896e--3232400f6176-osd--block--5b44ac90--f026--5081--896e--3232400f6176', 'dm-uuid-LVM-gbt4G8bLFnvTRoMGrRQv1WI2eQvndVYhJFVCvKStPk7a3I2lDgG5CRAphg1emVFQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.795994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796043 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.796060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.796103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--da34c784--00a3--5dad--8c50--6eedba006e78-osd--block--da34c784--00a3--5dad--8c50--6eedba006e78'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tXc4sV-CyJx-xHZf-oWbW-W8Ro-lx7X-V1kqk3', 'scsi-0QEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc', 'scsi-SQEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.796119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--34f4ec66--7b15--5133--bf2a--17bf3a27b54a-osd--block--34f4ec66--7b15--5133--bf2a--17bf3a27b54a', 'dm-uuid-LVM-4wYSlljS0T5isP1TsPE4NfyE6gf8XLP3gnnp7iCeVETjfDMSauD4FYQuBuhttzAd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5b44ac90--f026--5081--896e--3232400f6176-osd--block--5b44ac90--f026--5081--896e--3232400f6176'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MIMPw5-trda-akOu-1E4D-MbC0-mKzE-Ri7y2c', 'scsi-0QEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f', 'scsi-SQEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.796151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a', 'scsi-SQEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.796162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46f249ea--6148--566c--bc01--762c6d5847ca-osd--block--46f249ea--6148--566c--bc01--762c6d5847ca', 'dm-uuid-LVM-aHmAI4mSI4GUFXsGTUitW9CtCm0Sokn4urRKMuHj22aPNrbz6y4iTC4qHk29xAcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:16:40.796172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:16:40.796182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational':
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796192 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.796202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796228 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6be24fb8--e256--5721--a6a2--6a7f57bf9910-osd--block--6be24fb8--e256--5721--a6a2--6a7f57bf9910', 'dm-uuid-LVM-s0RZBQmqqycgxl7e1JyPQJ20o6pfPZupBQeyEBzW1QjSysrvihySRmw78rfZYOSC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part1', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part14', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part15', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part16', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed2553fc--8d98--5289--a275--720d5101f8b0-osd--block--ed2553fc--8d98--5289--a275--720d5101f8b0', 'dm-uuid-LVM-wNcPumkRip1ZOpXlItEaf9IOEdsSKVCe0LasbViKWzx55fVH1GrLseZl3obgMGl5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--34f4ec66--7b15--5133--bf2a--17bf3a27b54a-osd--block--34f4ec66--7b15--5133--bf2a--17bf3a27b54a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ed0ddS-DppI-QaOd-7IaL-3t1j-CG8t-ctGImb', 'scsi-0QEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495', 'scsi-SQEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796361 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--46f249ea--6148--566c--bc01--762c6d5847ca-osd--block--46f249ea--6148--566c--bc01--762c6d5847ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hev04r-oN11-kdP7-DYe0-VScV-6gkx-btEQdm', 'scsi-0QEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee', 'scsi-SQEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60', 'scsi-SQEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796434 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.796444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:16:40.796527 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 
'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': 
'4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6be24fb8--e256--5721--a6a2--6a7f57bf9910-osd--block--6be24fb8--e256--5721--a6a2--6a7f57bf9910'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pURzBD-dWYd-GrBi-KdcW-a30h-oPrL-2UTKtr', 'scsi-0QEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1', 'scsi-SQEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ed2553fc--8d98--5289--a275--720d5101f8b0-osd--block--ed2553fc--8d98--5289--a275--720d5101f8b0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LE2qnG-mk7t-bolv-6CtS-Ai8F-43K4-bf1ZWy', 'scsi-0QEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330', 'scsi-SQEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5', 'scsi-SQEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:16:40.796597 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.796608 | orchestrator | 2025-09-29 06:16:40.796618 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-29 06:16:40.796628 | orchestrator | Monday 29 September 2025 06:06:40 +0000 (0:00:01.295) 0:00:31.642 ****** 2025-09-29 06:16:40.796638 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796652 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796663 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796673 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796683 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796692 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796714 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796725 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796741 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part1', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part14', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part15', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part16', 'scsi-SQEMU_QEMU_HARDDISK_91d6b097-cf49-4bc5-9189-b5fe273ac0cf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-29 06:16:40.796757 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796773 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.796784 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-29 06:16:40.796794 | orchestrator | skipping: [testbed-node-1] => (items loop2-loop7, sda, sr0; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-09-29 06:16:40.796840 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.796949 | orchestrator | skipping: [testbed-node-2] => (items loop0-loop7, sda, sr0; false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-09-29 06:16:40.797080 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.797090 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.797107 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0-loop5; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-09-29 06:16:40.797123 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0-loop7, sda; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-09-29 06:16:40.797422 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0-loop7; false_condition: 'osd_auto_discovery | default(False) | bool')
2025-09-29 06:16:40.797697 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797709 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--34f4ec66--7b15--5133--bf2a--17bf3a27b54a-osd--block--34f4ec66--7b15--5133--bf2a--17bf3a27b54a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ed0ddS-DppI-QaOd-7IaL-3t1j-CG8t-ctGImb', 'scsi-0QEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495', 'scsi-SQEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797725 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6be24fb8--e256--5721--a6a2--6a7f57bf9910-osd--block--6be24fb8--e256--5721--a6a2--6a7f57bf9910'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pURzBD-dWYd-GrBi-KdcW-a30h-oPrL-2UTKtr', 'scsi-0QEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1', 'scsi-SQEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ed2553fc--8d98--5289--a275--720d5101f8b0-osd--block--ed2553fc--8d98--5289--a275--720d5101f8b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LE2qnG-mk7t-bolv-6CtS-Ai8F-43K4-bf1ZWy', 'scsi-0QEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330', 'scsi-SQEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797769 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5', 'scsi-SQEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--46f249ea--6148--566c--bc01--762c6d5847ca-osd--block--46f249ea--6148--566c--bc01--762c6d5847ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hev04r-oN11-kdP7-DYe0-VScV-6gkx-btEQdm', 'scsi-0QEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee', 'scsi-SQEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797820 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.797838 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60', 'scsi-SQEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-29 06:16:40.797870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797886 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--da34c784--00a3--5dad--8c50--6eedba006e78-osd--block--da34c784--00a3--5dad--8c50--6eedba006e78'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tXc4sV-CyJx-xHZf-oWbW-W8Ro-lx7X-V1kqk3', 'scsi-0QEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc', 'scsi-SQEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5b44ac90--f026--5081--896e--3232400f6176-osd--block--5b44ac90--f026--5081--896e--3232400f6176'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MIMPw5-trda-akOu-1E4D-MbC0-mKzE-Ri7y2c', 'scsi-0QEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f', 'scsi-SQEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:16:40.797912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a', 'scsi-SQEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-29 06:16:40.797922 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.797932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-29 06:16:40.797942 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.797951 | orchestrator |
2025-09-29 06:16:40.797961 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-29 06:16:40.797971 | orchestrator | Monday 29 September 2025 06:06:41 +0000 (0:00:01.586) 0:00:33.229 ******
2025-09-29 06:16:40.797981 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.797990 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.798000 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.798014 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.798080 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.798092 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.798103 | orchestrator |
2025-09-29 06:16:40.798114 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-29 06:16:40.798125 | orchestrator | Monday 29 September 2025 06:06:42 +0000 (0:00:01.178) 0:00:34.407 ******
2025-09-29 06:16:40.798136 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.798146 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.798157 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.798168 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.798178 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.798189 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.798200 | orchestrator |
2025-09-29 06:16:40.798211 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-29 06:16:40.798222 | orchestrator | Monday 29 September 2025 06:06:43 +0000 (0:00:00.668) 0:00:35.075 ******
2025-09-29 06:16:40.798233 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.798244 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.798254 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.798265 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.798276 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.798294 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.798305 | orchestrator |
2025-09-29 06:16:40.798315 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-29 06:16:40.798327 | orchestrator | Monday 29 September 2025 06:06:44 +0000 (0:00:01.160) 0:00:36.236 ******
2025-09-29 06:16:40.798336 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.798346 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.798355 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.798365 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.798379 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.798389 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.798398 | orchestrator |
2025-09-29 06:16:40.798415 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-29 06:16:40.798424 | orchestrator | Monday 29 September 2025 06:06:45 +0000 (0:00:00.573) 0:00:36.809 ******
2025-09-29 06:16:40.798434 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.798443 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.798452 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.798517 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.798527 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.798537 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.798546 | orchestrator |
2025-09-29 06:16:40.798556 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-29 06:16:40.798565 | orchestrator | Monday 29 September 2025 06:06:46 +0000 (0:00:01.379) 0:00:38.188 ******
2025-09-29 06:16:40.798575 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.798584 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.798594 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.798603 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.798613 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.798622 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.798631 | orchestrator |
2025-09-29 06:16:40.798639 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-29 06:16:40.798647 | orchestrator | Monday 29 September 2025 06:06:47 +0000 (0:00:00.620) 0:00:38.809 ******
2025-09-29 06:16:40.798655 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-29 06:16:40.798663 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-29 06:16:40.798671 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-29 06:16:40.798679 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-29 06:16:40.798687 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-29 06:16:40.798694 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-29 06:16:40.798702 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-29 06:16:40.798710 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-29 06:16:40.798718 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-29 06:16:40.798725 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-29 06:16:40.798733 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-29 06:16:40.798740 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-29 06:16:40.798748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-29 06:16:40.798756 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-29 06:16:40.798764 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-29 06:16:40.798771 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-29 06:16:40.798779 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-29 06:16:40.798786 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-29 06:16:40.798794 | orchestrator |
2025-09-29 06:16:40.798802 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-29 06:16:40.798810 | orchestrator | Monday 29 September 2025 06:06:51 +0000 (0:00:04.028) 0:00:42.837 ******
2025-09-29 06:16:40.798818 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-29 06:16:40.798832 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-29 06:16:40.798840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-29 06:16:40.798847 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-29 06:16:40.798855 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-29 06:16:40.798862 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-29 06:16:40.798870 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.798878 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-29 06:16:40.798886 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-29 06:16:40.798893 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-29 06:16:40.798901 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.798909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-29 06:16:40.798932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-29 06:16:40.798946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-29 06:16:40.798959 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.798971 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-29 06:16:40.798983 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-29 06:16:40.798994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-29 06:16:40.799006 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799018 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.799030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-29 06:16:40.799043 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-29 06:16:40.799056 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-29 06:16:40.799068 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.799081 | orchestrator |
2025-09-29 06:16:40.799093 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-29 06:16:40.799106 | orchestrator | Monday 29 September 2025 06:06:52 +0000 (0:00:00.774) 0:00:43.612 ******
2025-09-29 06:16:40.799119 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.799131 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.799142 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.799155 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.799168 | orchestrator |
2025-09-29 06:16:40.799189 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-29 06:16:40.799199 | orchestrator | Monday 29 September 2025 06:06:53 +0000 (0:00:01.362) 0:00:44.975 ******
2025-09-29 06:16:40.799206 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799214 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.799221 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.799229 | orchestrator |
2025-09-29 06:16:40.799237 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-29 06:16:40.799244 | orchestrator | Monday 29 September 2025 06:06:54 +0000 (0:00:00.505) 0:00:45.480 ******
2025-09-29 06:16:40.799252 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799259 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.799267 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.799274 | orchestrator |
2025-09-29 06:16:40.799282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-29 06:16:40.799290 | orchestrator | Monday 29 September 2025 06:06:54 +0000 (0:00:00.451) 0:00:45.931 ******
2025-09-29 06:16:40.799297 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799305 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.799313 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.799320 | orchestrator |
2025-09-29 06:16:40.799336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-29 06:16:40.799344 | orchestrator | Monday 29 September 2025 06:06:54 +0000 (0:00:00.408) 0:00:46.340 ******
2025-09-29 06:16:40.799351 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.799359 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.799367 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.799374 | orchestrator |
2025-09-29 06:16:40.799382 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-29 06:16:40.799390 | orchestrator | Monday 29 September 2025 06:06:56 +0000 (0:00:01.191) 0:00:47.532 ******
2025-09-29 06:16:40.799397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:16:40.799405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:16:40.799412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:16:40.799420 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799428 | orchestrator |
2025-09-29 06:16:40.799435 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-29 06:16:40.799443 | orchestrator | Monday 29 September 2025 06:06:56 +0000 (0:00:00.453) 0:00:47.985 ******
2025-09-29 06:16:40.799450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:16:40.799484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:16:40.799492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:16:40.799499 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799507 | orchestrator |
2025-09-29 06:16:40.799515 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-29 06:16:40.799522 | orchestrator | Monday 29 September 2025 06:06:57 +0000 (0:00:00.496) 0:00:48.481 ******
2025-09-29 06:16:40.799530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:16:40.799538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:16:40.799546 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:16:40.799553 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799561 | orchestrator |
2025-09-29 06:16:40.799568 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-29 06:16:40.799576 | orchestrator | Monday 29 September 2025 06:06:57 +0000 (0:00:00.868) 0:00:49.350 ******
2025-09-29 06:16:40.799584 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.799591 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.799599 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.799607 | orchestrator |
2025-09-29 06:16:40.799615 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-29 06:16:40.799622 | orchestrator | Monday 29 September 2025 06:06:58 +0000 (0:00:00.353) 0:00:49.704 ******
2025-09-29 06:16:40.799630 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-29 06:16:40.799638 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-29 06:16:40.799645 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-29 06:16:40.799653 | orchestrator |
2025-09-29 06:16:40.799661 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-29 06:16:40.799669 | orchestrator | Monday 29 September 2025 06:06:59 +0000 (0:00:00.894) 0:00:50.599 ******
2025-09-29 06:16:40.799685 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-29 06:16:40.799693 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-29 06:16:40.799701 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-29 06:16:40.799709 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-29 06:16:40.799717 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-29 06:16:40.799724 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-29 06:16:40.799732 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-29 06:16:40.799745 | orchestrator |
2025-09-29 06:16:40.799753 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-29 06:16:40.799761 | orchestrator | Monday 29 September 2025 06:07:00 +0000 (0:00:01.288) 0:00:51.887 ******
2025-09-29 06:16:40.799768 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-29 06:16:40.799776 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-29 06:16:40.799784 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-29 06:16:40.799791 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-09-29 06:16:40.799803 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-29 06:16:40.799811 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-29 06:16:40.799819 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-29 06:16:40.799826 | orchestrator |
2025-09-29 06:16:40.799834 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-29 06:16:40.799842 | orchestrator | Monday 29 September 2025 06:07:02 +0000 (0:00:02.278) 0:00:54.165 ******
2025-09-29 06:16:40.799850 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.799860 | orchestrator |
2025-09-29 06:16:40.799867 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-29 06:16:40.799875 | orchestrator | Monday 29 September 2025 06:07:04 +0000 (0:00:01.817) 0:00:55.983 ******
2025-09-29 06:16:40.799883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2025-09-29 06:16:40.799891 | orchestrator |
2025-09-29 06:16:40.799899 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-29 06:16:40.799907 | orchestrator | Monday 29 September 2025 06:07:06 +0000 (0:00:01.553) 0:00:57.537 ******
2025-09-29 06:16:40.799920 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.799934 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.799946 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.799959 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.799973 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.799986 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.799999 | orchestrator |
2025-09-29 06:16:40.800012 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-29 06:16:40.800025 | orchestrator | Monday 29 September 2025 06:07:07 +0000 (0:00:01.145) 0:00:58.682 ******
2025-09-29 06:16:40.800039 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.800055 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.800069 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.800082 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.800097 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.800115 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.800129 | orchestrator | 2025-09-29 06:16:40.800144 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-29 06:16:40.800158 | orchestrator | Monday 29 September 2025 06:07:08 +0000 (0:00:01.678) 0:01:00.360 ****** 2025-09-29 06:16:40.800171 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.800184 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.800197 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.800209 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.800222 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.800237 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.800251 | orchestrator | 2025-09-29 06:16:40.800265 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-29 06:16:40.800279 | orchestrator | Monday 29 September 2025 06:07:10 +0000 (0:00:02.060) 0:01:02.421 ****** 2025-09-29 06:16:40.800303 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.800312 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.800319 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.800327 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.800334 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.800342 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.800349 | orchestrator | 2025-09-29 06:16:40.800357 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-29 06:16:40.800365 | orchestrator | Monday 29 September 2025 06:07:12 +0000 (0:00:01.221) 
0:01:03.643 ****** 2025-09-29 06:16:40.800372 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.800380 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.800388 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.800395 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.800403 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.800410 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.800418 | orchestrator | 2025-09-29 06:16:40.800425 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-29 06:16:40.800433 | orchestrator | Monday 29 September 2025 06:07:12 +0000 (0:00:00.680) 0:01:04.323 ****** 2025-09-29 06:16:40.800449 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.800483 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.800493 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.800501 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.800509 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.800516 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.800524 | orchestrator | 2025-09-29 06:16:40.800532 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-29 06:16:40.800540 | orchestrator | Monday 29 September 2025 06:07:13 +0000 (0:00:00.718) 0:01:05.042 ****** 2025-09-29 06:16:40.800547 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.800555 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.800563 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.800570 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.800578 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.800586 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.800593 | orchestrator | 2025-09-29 06:16:40.800601 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] 
************************* 2025-09-29 06:16:40.800609 | orchestrator | Monday 29 September 2025 06:07:14 +0000 (0:00:00.746) 0:01:05.788 ****** 2025-09-29 06:16:40.800622 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.800635 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.800646 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.800660 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.800673 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.800686 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.800699 | orchestrator | 2025-09-29 06:16:40.800713 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-29 06:16:40.800727 | orchestrator | Monday 29 September 2025 06:07:15 +0000 (0:00:01.258) 0:01:07.047 ****** 2025-09-29 06:16:40.800739 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.800759 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.800770 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.800781 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.800793 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.800806 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.800819 | orchestrator | 2025-09-29 06:16:40.800832 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-29 06:16:40.800846 | orchestrator | Monday 29 September 2025 06:07:16 +0000 (0:00:01.144) 0:01:08.191 ****** 2025-09-29 06:16:40.800860 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.800868 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.800876 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.800884 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.800899 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.800907 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.800915 | orchestrator | 2025-09-29 06:16:40.800922 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-29 06:16:40.800930 | orchestrator | Monday 29 September 2025 06:07:17 +0000 (0:00:00.916) 0:01:09.107 ****** 2025-09-29 06:16:40.800937 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.800945 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.800953 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.800960 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.800968 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.800975 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.800983 | orchestrator | 2025-09-29 06:16:40.800990 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-29 06:16:40.800998 | orchestrator | Monday 29 September 2025 06:07:18 +0000 (0:00:00.585) 0:01:09.693 ****** 2025-09-29 06:16:40.801005 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.801013 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.801020 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.801028 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.801035 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.801043 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.801050 | orchestrator | 2025-09-29 06:16:40.801058 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-29 06:16:40.801066 | orchestrator | Monday 29 September 2025 06:07:18 +0000 (0:00:00.681) 0:01:10.374 ****** 2025-09-29 06:16:40.801074 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.801081 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.801089 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.801096 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.801104 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.801111 | orchestrator | ok: 
[testbed-node-5] 2025-09-29 06:16:40.801119 | orchestrator | 2025-09-29 06:16:40.801127 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-29 06:16:40.801134 | orchestrator | Monday 29 September 2025 06:07:19 +0000 (0:00:00.642) 0:01:11.017 ****** 2025-09-29 06:16:40.801142 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.801150 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.801157 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.801165 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.801172 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.801180 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.801187 | orchestrator | 2025-09-29 06:16:40.801195 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-29 06:16:40.801203 | orchestrator | Monday 29 September 2025 06:07:20 +0000 (0:00:00.796) 0:01:11.813 ****** 2025-09-29 06:16:40.801210 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.801218 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.801225 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.801233 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.801240 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.801248 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.801255 | orchestrator | 2025-09-29 06:16:40.801263 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-29 06:16:40.801270 | orchestrator | Monday 29 September 2025 06:07:20 +0000 (0:00:00.598) 0:01:12.411 ****** 2025-09-29 06:16:40.801278 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.801285 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.801293 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.801300 | orchestrator | skipping: [testbed-node-3] 
2025-09-29 06:16:40.801308 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.801315 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.801323 | orchestrator | 2025-09-29 06:16:40.801330 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-29 06:16:40.801350 | orchestrator | Monday 29 September 2025 06:07:21 +0000 (0:00:00.826) 0:01:13.238 ****** 2025-09-29 06:16:40.801359 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.801366 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.801374 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.801381 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.801389 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.801396 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.801404 | orchestrator | 2025-09-29 06:16:40.801411 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-29 06:16:40.801419 | orchestrator | Monday 29 September 2025 06:07:22 +0000 (0:00:00.759) 0:01:13.997 ****** 2025-09-29 06:16:40.801427 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.801434 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.801442 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.801449 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.801504 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.801514 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.801522 | orchestrator | 2025-09-29 06:16:40.801530 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-29 06:16:40.801538 | orchestrator | Monday 29 September 2025 06:07:23 +0000 (0:00:00.692) 0:01:14.690 ****** 2025-09-29 06:16:40.801551 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.801563 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.801576 | orchestrator | ok: [testbed-node-2] 
2025-09-29 06:16:40.801587 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.801597 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.801609 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.801621 | orchestrator | 2025-09-29 06:16:40.801633 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-29 06:16:40.801644 | orchestrator | Monday 29 September 2025 06:07:24 +0000 (0:00:01.087) 0:01:15.778 ****** 2025-09-29 06:16:40.801655 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.801677 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.801686 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.801697 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.801707 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.801717 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.801728 | orchestrator | 2025-09-29 06:16:40.801739 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-29 06:16:40.801749 | orchestrator | Monday 29 September 2025 06:07:25 +0000 (0:00:01.544) 0:01:17.322 ****** 2025-09-29 06:16:40.801761 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.801772 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.801783 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.801795 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.801806 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.801818 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.801829 | orchestrator | 2025-09-29 06:16:40.801841 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-29 06:16:40.801853 | orchestrator | Monday 29 September 2025 06:07:27 +0000 (0:00:02.029) 0:01:19.351 ****** 2025-09-29 06:16:40.801865 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.801878 | orchestrator | 2025-09-29 06:16:40.801892 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-29 06:16:40.801906 | orchestrator | Monday 29 September 2025 06:07:28 +0000 (0:00:01.050) 0:01:20.402 ****** 2025-09-29 06:16:40.801917 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.801929 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.801941 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.801952 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.801963 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.801981 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.801990 | orchestrator | 2025-09-29 06:16:40.801999 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-29 06:16:40.802009 | orchestrator | Monday 29 September 2025 06:07:29 +0000 (0:00:00.508) 0:01:20.910 ****** 2025-09-29 06:16:40.802056 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.802070 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.802084 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.802096 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.802108 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.802118 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.802128 | orchestrator | 2025-09-29 06:16:40.802139 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-29 06:16:40.802151 | orchestrator | Monday 29 September 2025 06:07:30 +0000 (0:00:00.683) 0:01:21.594 ****** 2025-09-29 06:16:40.802163 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-29 
06:16:40.802176 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-29 06:16:40.802188 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-29 06:16:40.802199 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-29 06:16:40.802212 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-29 06:16:40.802223 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-29 06:16:40.802232 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-29 06:16:40.802244 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-29 06:16:40.802255 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-29 06:16:40.802265 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-29 06:16:40.802274 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-29 06:16:40.802285 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-29 06:16:40.802297 | orchestrator | 2025-09-29 06:16:40.802324 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-29 06:16:40.802337 | orchestrator | Monday 29 September 2025 06:07:31 +0000 (0:00:01.277) 0:01:22.871 ****** 2025-09-29 06:16:40.802349 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.802361 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.802371 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.802383 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.802395 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.802407 | 
orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.802418 | orchestrator | 2025-09-29 06:16:40.802430 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-29 06:16:40.802442 | orchestrator | Monday 29 September 2025 06:07:32 +0000 (0:00:01.186) 0:01:24.058 ****** 2025-09-29 06:16:40.802453 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.802489 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.802500 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.802511 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.802523 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.802535 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.802546 | orchestrator | 2025-09-29 06:16:40.802558 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-29 06:16:40.802570 | orchestrator | Monday 29 September 2025 06:07:33 +0000 (0:00:00.587) 0:01:24.646 ****** 2025-09-29 06:16:40.802581 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.802592 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.802616 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.802627 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.802638 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.802649 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.802659 | orchestrator | 2025-09-29 06:16:40.802672 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-29 06:16:40.802679 | orchestrator | Monday 29 September 2025 06:07:33 +0000 (0:00:00.772) 0:01:25.418 ****** 2025-09-29 06:16:40.802685 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.802692 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.802698 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.802704 | 
orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.802711 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.802717 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.802724 | orchestrator | 2025-09-29 06:16:40.802730 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-29 06:16:40.802737 | orchestrator | Monday 29 September 2025 06:07:34 +0000 (0:00:00.597) 0:01:26.015 ****** 2025-09-29 06:16:40.802743 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.802750 | orchestrator | 2025-09-29 06:16:40.802757 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-29 06:16:40.802763 | orchestrator | Monday 29 September 2025 06:07:35 +0000 (0:00:01.204) 0:01:27.220 ****** 2025-09-29 06:16:40.802770 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.802776 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.802783 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.802789 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.802796 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.802802 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.802809 | orchestrator | 2025-09-29 06:16:40.802815 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-29 06:16:40.802822 | orchestrator | Monday 29 September 2025 06:08:25 +0000 (0:00:49.422) 0:02:16.642 ****** 2025-09-29 06:16:40.802828 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-29 06:16:40.802835 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-29 06:16:40.802842 | orchestrator | skipping: [testbed-node-0] => 
(item=docker.io/grafana/grafana:6.7.4)  2025-09-29 06:16:40.802848 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.802854 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-29 06:16:40.802861 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-29 06:16:40.802867 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-29 06:16:40.802874 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.802880 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-29 06:16:40.802887 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-29 06:16:40.802893 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-29 06:16:40.802900 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.802906 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-29 06:16:40.802912 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-29 06:16:40.802919 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-29 06:16:40.802925 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.802932 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-29 06:16:40.802938 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-29 06:16:40.802945 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-29 06:16:40.802956 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.802963 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-29 06:16:40.802969 | orchestrator | skipping: [testbed-node-5] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-09-29 06:16:40.802976 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-29 06:16:40.802989 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.802998 | orchestrator | 2025-09-29 06:16:40.803009 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-29 06:16:40.803021 | orchestrator | Monday 29 September 2025 06:08:25 +0000 (0:00:00.703) 0:02:17.346 ****** 2025-09-29 06:16:40.803032 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.803043 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.803055 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.803067 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.803079 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.803090 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.803101 | orchestrator | 2025-09-29 06:16:40.803111 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-29 06:16:40.803121 | orchestrator | Monday 29 September 2025 06:08:26 +0000 (0:00:00.583) 0:02:17.930 ****** 2025-09-29 06:16:40.803132 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.803143 | orchestrator | 2025-09-29 06:16:40.803150 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-29 06:16:40.803157 | orchestrator | Monday 29 September 2025 06:08:26 +0000 (0:00:00.269) 0:02:18.199 ****** 2025-09-29 06:16:40.803163 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.803170 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.803176 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.803182 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.803189 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.803195 | orchestrator | skipping: 
[testbed-node-5]
2025-09-29 06:16:40.803201 | orchestrator |
2025-09-29 06:16:40.803208 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-29 06:16:40.803219 | orchestrator | Monday 29 September 2025 06:08:27 +0000 (0:00:00.533) 0:02:18.732 ******
2025-09-29 06:16:40.803226 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.803232 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.803239 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.803245 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.803251 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.803258 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.803264 | orchestrator |
2025-09-29 06:16:40.803271 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-29 06:16:40.803277 | orchestrator | Monday 29 September 2025 06:08:27 +0000 (0:00:00.654) 0:02:19.386 ******
2025-09-29 06:16:40.803283 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.803294 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.803305 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.803317 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.803328 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.803339 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.803351 | orchestrator |
2025-09-29 06:16:40.803363 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-29 06:16:40.803375 | orchestrator | Monday 29 September 2025 06:08:28 +0000 (0:00:00.580) 0:02:19.967 ******
2025-09-29 06:16:40.803385 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.803396 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.803407 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.803413 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.803420 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.803426 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.803438 | orchestrator |
2025-09-29 06:16:40.803445 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-29 06:16:40.803452 | orchestrator | Monday 29 September 2025 06:08:30 +0000 (0:00:02.274) 0:02:22.242 ******
2025-09-29 06:16:40.803475 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.803483 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.803489 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.803496 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.803502 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.803508 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.803515 | orchestrator |
2025-09-29 06:16:40.803524 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-29 06:16:40.803534 | orchestrator | Monday 29 September 2025 06:08:31 +0000 (0:00:00.456) 0:02:22.699 ******
2025-09-29 06:16:40.803546 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.803559 | orchestrator |
2025-09-29 06:16:40.803571 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-29 06:16:40.803583 | orchestrator | Monday 29 September 2025 06:08:32 +0000 (0:00:00.875) 0:02:23.574 ******
2025-09-29 06:16:40.803593 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.803604 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.803612 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.803619 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.803625 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.803632 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.803638 | orchestrator |
2025-09-29 06:16:40.803645 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-29 06:16:40.803651 | orchestrator | Monday 29 September 2025 06:08:32 +0000 (0:00:00.482) 0:02:24.057 ******
2025-09-29 06:16:40.803658 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.803664 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.803671 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.803677 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.803684 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.803690 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.803697 | orchestrator |
2025-09-29 06:16:40.803703 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-09-29 06:16:40.803710 | orchestrator | Monday 29 September 2025 06:08:33 +0000 (0:00:00.654) 0:02:24.711 ******
2025-09-29 06:16:40.803717 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.803727 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.803738 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.803749 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.803760 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.803771 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.803782 | orchestrator |
2025-09-29 06:16:40.803793 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-09-29 06:16:40.803813 | orchestrator | Monday 29 September 2025 06:08:33 +0000 (0:00:00.467) 0:02:25.178 ******
2025-09-29 06:16:40.803825 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.803837 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.803849 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.803860 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.803871 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.803882 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.803893 | orchestrator |
2025-09-29 06:16:40.803905 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-09-29 06:16:40.803917 | orchestrator | Monday 29 September 2025 06:08:34 +0000 (0:00:00.928) 0:02:26.106 ******
2025-09-29 06:16:40.803928 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.803938 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.803950 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.803963 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.803970 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.803976 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.803982 | orchestrator |
2025-09-29 06:16:40.803989 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-09-29 06:16:40.803995 | orchestrator | Monday 29 September 2025 06:08:35 +0000 (0:00:00.563) 0:02:26.670 ******
2025-09-29 06:16:40.804004 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.804015 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.804026 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.804037 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.804049 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.804061 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.804072 | orchestrator |
2025-09-29 06:16:40.804089 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-09-29 06:16:40.804100 | orchestrator | Monday 29 September 2025 06:08:36 +0000 (0:00:00.813) 0:02:27.483 ******
2025-09-29 06:16:40.804112 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.804119 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.804126 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.804132 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.804139 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.804145 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.804152 | orchestrator |
2025-09-29 06:16:40.804158 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-09-29 06:16:40.804165 | orchestrator | Monday 29 September 2025 06:08:36 +0000 (0:00:00.634) 0:02:28.118 ******
2025-09-29 06:16:40.804171 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.804177 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.804184 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.804190 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.804196 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.804203 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.804209 | orchestrator |
2025-09-29 06:16:40.804216 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-09-29 06:16:40.804222 | orchestrator | Monday 29 September 2025 06:08:37 +0000 (0:00:00.657) 0:02:28.775 ******
2025-09-29 06:16:40.804229 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.804235 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.804242 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.804248 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.804254 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.804261 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.804267 | orchestrator |
2025-09-29 06:16:40.804274 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-09-29 06:16:40.804280 | orchestrator | Monday 29 September 2025 06:08:38 +0000 (0:00:01.147) 0:02:29.922 ******
2025-09-29 06:16:40.804287 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.804294 | orchestrator |
2025-09-29 06:16:40.804301 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-09-29 06:16:40.804307 | orchestrator | Monday 29 September 2025 06:08:39 +0000 (0:00:01.056) 0:02:30.979 ******
2025-09-29 06:16:40.804314 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-09-29 06:16:40.804321 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-09-29 06:16:40.804327 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-09-29 06:16:40.804334 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-09-29 06:16:40.804340 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-09-29 06:16:40.804347 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-09-29 06:16:40.804353 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-09-29 06:16:40.804365 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-09-29 06:16:40.804372 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-09-29 06:16:40.804378 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-09-29 06:16:40.804385 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-09-29 06:16:40.804391 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-09-29 06:16:40.804398 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-09-29 06:16:40.804404 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-09-29 06:16:40.804411 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-09-29 06:16:40.804422 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-09-29 06:16:40.804433 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-09-29 06:16:40.804444 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-09-29 06:16:40.804455 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-09-29 06:16:40.804621 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-09-29 06:16:40.804629 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-09-29 06:16:40.804645 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-09-29 06:16:40.804652 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-09-29 06:16:40.804658 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-09-29 06:16:40.804665 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-09-29 06:16:40.804671 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-09-29 06:16:40.804678 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-09-29 06:16:40.804684 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-09-29 06:16:40.804691 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-09-29 06:16:40.804697 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-09-29 06:16:40.804704 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-09-29 06:16:40.804710 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-09-29 06:16:40.804716 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-09-29 06:16:40.804723 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-09-29 06:16:40.804730 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-09-29 06:16:40.804736 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-09-29 06:16:40.804742 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-09-29 06:16:40.804749 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-09-29 06:16:40.804755 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-09-29 06:16:40.804768 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-09-29 06:16:40.804775 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-09-29 06:16:40.804781 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-09-29 06:16:40.804787 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-09-29 06:16:40.804794 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-09-29 06:16:40.804800 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-29 06:16:40.804807 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-09-29 06:16:40.804814 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-09-29 06:16:40.804820 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-09-29 06:16:40.804826 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-29 06:16:40.804833 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-09-29 06:16:40.804846 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-29 06:16:40.804853 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-29 06:16:40.804860 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-29 06:16:40.804866 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-29 06:16:40.804873 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-29 06:16:40.804879 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-09-29 06:16:40.804886 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-29 06:16:40.804892 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-29 06:16:40.804898 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-29 06:16:40.804905 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-29 06:16:40.804911 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-29 06:16:40.804918 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-09-29 06:16:40.804924 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-29 06:16:40.804931 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-29 06:16:40.804937 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-29 06:16:40.804943 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-29 06:16:40.804950 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-29 06:16:40.804956 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-09-29 06:16:40.804963 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-29 06:16:40.804969 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-29 06:16:40.804976 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-29 06:16:40.804982 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-29 06:16:40.804988 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-29 06:16:40.804994 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-09-29 06:16:40.805000 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-29 06:16:40.805006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-29 06:16:40.805012 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-29 06:16:40.805018 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-29 06:16:40.805024 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-29 06:16:40.805034 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-09-29 06:16:40.805040 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-29 06:16:40.805046 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-29 06:16:40.805052 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-09-29 06:16:40.805058 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-29 06:16:40.805064 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-09-29 06:16:40.805071 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-09-29 06:16:40.805077 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-09-29 06:16:40.805083 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-09-29 06:16:40.805089 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-09-29 06:16:40.805095 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-09-29 06:16:40.805105 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-09-29 06:16:40.805112 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-09-29 06:16:40.805118 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-09-29 06:16:40.805124 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-09-29 06:16:40.805130 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-09-29 06:16:40.805136 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-09-29 06:16:40.805142 | orchestrator |
2025-09-29 06:16:40.805151 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-09-29 06:16:40.805157 | orchestrator | Monday 29 September 2025 06:08:46 +0000 (0:00:06.933) 0:02:37.913 ******
2025-09-29 06:16:40.805163 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805169 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805175 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805182 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.805188 | orchestrator |
2025-09-29 06:16:40.805194 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-09-29 06:16:40.805200 | orchestrator | Monday 29 September 2025 06:08:47 +0000 (0:00:00.962) 0:02:38.875 ******
2025-09-29 06:16:40.805206 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.805212 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.805219 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.805225 | orchestrator |
2025-09-29 06:16:40.805231 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-09-29 06:16:40.805237 | orchestrator | Monday 29 September 2025 06:08:48 +0000 (0:00:00.877) 0:02:39.753 ******
2025-09-29 06:16:40.805243 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.805249 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.805255 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.805262 | orchestrator |
2025-09-29 06:16:40.805268 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-09-29 06:16:40.805274 | orchestrator | Monday 29 September 2025 06:08:49 +0000 (0:00:01.558) 0:02:41.312 ******
2025-09-29 06:16:40.805280 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805286 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805292 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805298 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.805304 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.805310 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.805316 | orchestrator |
2025-09-29 06:16:40.805322 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-09-29 06:16:40.805328 | orchestrator | Monday 29 September 2025 06:08:50 +0000 (0:00:00.756) 0:02:42.068 ******
2025-09-29 06:16:40.805334 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805340 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805346 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805352 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.805358 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.805364 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.805370 | orchestrator |
2025-09-29 06:16:40.805376 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-09-29 06:16:40.805390 | orchestrator | Monday 29 September 2025 06:08:51 +0000 (0:00:00.967) 0:02:43.036 ******
2025-09-29 06:16:40.805396 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805402 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805408 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805414 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.805420 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.805426 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.805432 | orchestrator |
2025-09-29 06:16:40.805438 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-09-29 06:16:40.805445 | orchestrator | Monday 29 September 2025 06:08:52 +0000 (0:00:00.582) 0:02:43.618 ******
2025-09-29 06:16:40.805451 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805493 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805505 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805512 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.805518 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.805524 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.805530 | orchestrator |
2025-09-29 06:16:40.805536 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-09-29 06:16:40.805542 | orchestrator | Monday 29 September 2025 06:08:52 +0000 (0:00:00.515) 0:02:44.134 ******
2025-09-29 06:16:40.805548 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805554 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805560 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805565 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.805572 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.805578 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.805584 | orchestrator |
2025-09-29 06:16:40.805590 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-09-29 06:16:40.805596 | orchestrator | Monday 29 September 2025 06:08:53 +0000 (0:00:00.701) 0:02:44.835 ******
2025-09-29 06:16:40.805602 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805608 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805614 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805620 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.805626 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.805632 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.805638 | orchestrator |
2025-09-29 06:16:40.805644 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-09-29 06:16:40.805650 | orchestrator | Monday 29 September 2025 06:08:53 +0000 (0:00:00.555) 0:02:45.391 ******
2025-09-29 06:16:40.805656 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805669 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805675 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805681 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.805687 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.805693 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.805699 | orchestrator |
2025-09-29 06:16:40.805705 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-09-29 06:16:40.805712 | orchestrator | Monday 29 September 2025 06:08:54 +0000 (0:00:00.769) 0:02:46.161 ******
2025-09-29 06:16:40.805718 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805724 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805730 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805736 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.805742 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.805748 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.805754 | orchestrator |
2025-09-29 06:16:40.805760 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-09-29 06:16:40.805766 | orchestrator | Monday 29 September 2025 06:08:55 +0000 (0:00:00.604) 0:02:46.765 ******
2025-09-29 06:16:40.805777 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805783 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805789 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805795 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.805801 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.805807 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.805813 | orchestrator |
2025-09-29 06:16:40.805819 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-09-29 06:16:40.805825 | orchestrator | Monday 29 September 2025 06:08:59 +0000 (0:00:03.752) 0:02:50.518 ******
2025-09-29 06:16:40.805831 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805837 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805843 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805849 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.805855 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.805861 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.805867 | orchestrator |
2025-09-29 06:16:40.805873 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-09-29 06:16:40.805880 | orchestrator | Monday 29 September 2025 06:09:00 +0000 (0:00:00.950) 0:02:51.468 ******
2025-09-29 06:16:40.805886 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805892 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805898 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805904 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.805910 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.805916 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.805922 | orchestrator |
2025-09-29 06:16:40.805928 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-09-29 06:16:40.805934 | orchestrator | Monday 29 September 2025 06:09:01 +0000 (0:00:01.218) 0:02:52.686 ******
2025-09-29 06:16:40.805940 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.805946 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.805952 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.805958 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.805964 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.805970 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.805976 | orchestrator |
2025-09-29 06:16:40.805982 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-09-29 06:16:40.805988 | orchestrator | Monday 29 September 2025 06:09:01 +0000 (0:00:00.708) 0:02:53.395 ******
2025-09-29 06:16:40.805994 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806000 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806006 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806013 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.806044 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.806050 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.806056 | orchestrator |
2025-09-29 06:16:40.806063 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-09-29 06:16:40.806074 | orchestrator | Monday 29 September 2025 06:09:02 +0000 (0:00:01.009) 0:02:54.404 ******
2025-09-29 06:16:40.806081 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806087 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806093 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806101 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-09-29 06:16:40.806115 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-09-29 06:16:40.806123 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.806129 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-09-29 06:16:40.806139 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-09-29 06:16:40.806145 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.806151 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-09-29 06:16:40.806158 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-09-29 06:16:40.806164 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.806170 | orchestrator |
2025-09-29 06:16:40.806176 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-29 06:16:40.806182 | orchestrator | Monday 29 September 2025 06:09:03 +0000 (0:00:00.711) 0:02:55.116 ******
2025-09-29 06:16:40.806188 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806194 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806200 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806206 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.806212 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.806218 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.806224 | orchestrator |
2025-09-29 06:16:40.806230 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-29 06:16:40.806236 | orchestrator | Monday 29 September 2025 06:09:04 +0000 (0:00:00.767) 0:02:55.883 ******
2025-09-29 06:16:40.806242 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806248 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806254 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806260 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.806266 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.806272 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.806278 | orchestrator |
2025-09-29 06:16:40.806284 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-29 06:16:40.806290 | orchestrator | Monday 29 September 2025 06:09:04 +0000 (0:00:00.528) 0:02:56.412 ******
2025-09-29 06:16:40.806296 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806302 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806308 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806314 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.806320 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.806326 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.806332 | orchestrator |
2025-09-29 06:16:40.806338 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-29 06:16:40.806344 | orchestrator | Monday 29 September 2025 06:09:05 +0000 (0:00:00.701) 0:02:57.114 ******
2025-09-29 06:16:40.806354 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806360 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806366 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806372 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.806378 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.806384 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.806390 | orchestrator |
2025-09-29 06:16:40.806396 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-29 06:16:40.806402 | orchestrator | Monday 29 September 2025 06:09:06 +0000 (0:00:00.859) 0:02:57.973 ******
2025-09-29 06:16:40.806408 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806414 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806420 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806438 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.806444 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.806450 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.806472 | orchestrator |
2025-09-29 06:16:40.806480 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-29 06:16:40.806486 | orchestrator | Monday 29 September 2025 06:09:07 +0000 (0:00:00.884) 0:02:58.858 ******
2025-09-29 06:16:40.806492 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806498 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806504 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806510 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.806516 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.806522 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.806528 | orchestrator |
2025-09-29 06:16:40.806534 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-29 06:16:40.806541 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:00.823) 0:02:59.681 ******
2025-09-29 06:16:40.806547 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-29 06:16:40.806553 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-29 06:16:40.806559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-29 06:16:40.806565 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806570 | orchestrator |
2025-09-29 06:16:40.806577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-29 06:16:40.806583 | orchestrator | Monday 29 September 2025 06:09:08 +0000 (0:00:00.513) 0:03:00.195 ******
2025-09-29 06:16:40.806589 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-29 06:16:40.806595 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-29 06:16:40.806604 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-29 06:16:40.806610 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806616 | orchestrator |
2025-09-29 06:16:40.806622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-29 06:16:40.806628 | orchestrator | Monday 29 September 2025 06:09:09 +0000 (0:00:00.461) 0:03:00.656 ******
2025-09-29 06:16:40.806634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-29 06:16:40.806640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-29 06:16:40.806646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-29 06:16:40.806652 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806658 | orchestrator |
2025-09-29 06:16:40.806664 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-29 06:16:40.806670 | orchestrator | Monday 29 September 2025 06:09:09 +0000 (0:00:00.554) 0:03:01.211 ******
2025-09-29 06:16:40.806676 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.806682 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.806688 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.806694 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.806700 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.806711 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.806717 | orchestrator |
2025-09-29 06:16:40.806723 | 
orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-29 06:16:40.806729 | orchestrator | Monday 29 September 2025 06:09:10 +0000 (0:00:01.114) 0:03:02.325 ****** 2025-09-29 06:16:40.806735 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-09-29 06:16:40.806741 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.806747 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-09-29 06:16:40.806753 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.806759 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-09-29 06:16:40.806765 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.806771 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-29 06:16:40.806777 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-29 06:16:40.806783 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-29 06:16:40.806789 | orchestrator | 2025-09-29 06:16:40.806795 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-09-29 06:16:40.806801 | orchestrator | Monday 29 September 2025 06:09:13 +0000 (0:00:02.218) 0:03:04.544 ****** 2025-09-29 06:16:40.806807 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.806813 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.806819 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.806825 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.806831 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.806837 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.806843 | orchestrator | 2025-09-29 06:16:40.806849 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-29 06:16:40.806855 | orchestrator | Monday 29 September 2025 06:09:16 +0000 (0:00:03.161) 0:03:07.706 ****** 2025-09-29 06:16:40.806861 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.806867 | 
orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.806873 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.806879 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.806885 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.806891 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.806897 | orchestrator | 2025-09-29 06:16:40.806903 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-29 06:16:40.806909 | orchestrator | Monday 29 September 2025 06:09:17 +0000 (0:00:01.024) 0:03:08.730 ****** 2025-09-29 06:16:40.806915 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.806921 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.806927 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.806933 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:16:40.806939 | orchestrator | 2025-09-29 06:16:40.806945 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-29 06:16:40.806952 | orchestrator | Monday 29 September 2025 06:09:18 +0000 (0:00:00.851) 0:03:09.581 ****** 2025-09-29 06:16:40.806958 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.806963 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.806970 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.806975 | orchestrator | 2025-09-29 06:16:40.806982 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-29 06:16:40.806992 | orchestrator | Monday 29 September 2025 06:09:18 +0000 (0:00:00.279) 0:03:09.861 ****** 2025-09-29 06:16:40.806998 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.807004 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.807010 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.807016 | orchestrator | 
2025-09-29 06:16:40.807023 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-29 06:16:40.807029 | orchestrator | Monday 29 September 2025 06:09:19 +0000 (0:00:01.142) 0:03:11.003 ****** 2025-09-29 06:16:40.807035 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-29 06:16:40.807045 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-29 06:16:40.807051 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-29 06:16:40.807057 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.807063 | orchestrator | 2025-09-29 06:16:40.807069 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-29 06:16:40.807075 | orchestrator | Monday 29 September 2025 06:09:20 +0000 (0:00:00.929) 0:03:11.933 ****** 2025-09-29 06:16:40.807081 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.807087 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.807093 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.807099 | orchestrator | 2025-09-29 06:16:40.807105 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-29 06:16:40.807112 | orchestrator | Monday 29 September 2025 06:09:21 +0000 (0:00:00.621) 0:03:12.554 ****** 2025-09-29 06:16:40.807118 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.807124 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.807130 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.807139 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.807145 | orchestrator | 2025-09-29 06:16:40.807151 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-29 06:16:40.807157 | orchestrator | Monday 29 September 2025 06:09:22 +0000 
(0:00:00.950) 0:03:13.505 ****** 2025-09-29 06:16:40.807163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-29 06:16:40.807169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-29 06:16:40.807175 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-29 06:16:40.807181 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807187 | orchestrator | 2025-09-29 06:16:40.807194 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-29 06:16:40.807200 | orchestrator | Monday 29 September 2025 06:09:22 +0000 (0:00:00.386) 0:03:13.892 ****** 2025-09-29 06:16:40.807206 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.807212 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807218 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.807224 | orchestrator | 2025-09-29 06:16:40.807230 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-29 06:16:40.807236 | orchestrator | Monday 29 September 2025 06:09:23 +0000 (0:00:00.866) 0:03:14.758 ****** 2025-09-29 06:16:40.807242 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807248 | orchestrator | 2025-09-29 06:16:40.807254 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-29 06:16:40.807260 | orchestrator | Monday 29 September 2025 06:09:23 +0000 (0:00:00.322) 0:03:15.081 ****** 2025-09-29 06:16:40.807266 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807272 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.807278 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.807284 | orchestrator | 2025-09-29 06:16:40.807290 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-29 06:16:40.807296 | orchestrator | Monday 29 September 2025 06:09:24 
+0000 (0:00:00.664) 0:03:15.746 ****** 2025-09-29 06:16:40.807302 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807308 | orchestrator | 2025-09-29 06:16:40.807314 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-29 06:16:40.807320 | orchestrator | Monday 29 September 2025 06:09:24 +0000 (0:00:00.269) 0:03:16.016 ****** 2025-09-29 06:16:40.807326 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807332 | orchestrator | 2025-09-29 06:16:40.807338 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-29 06:16:40.807344 | orchestrator | Monday 29 September 2025 06:09:25 +0000 (0:00:00.514) 0:03:16.530 ****** 2025-09-29 06:16:40.807350 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807361 | orchestrator | 2025-09-29 06:16:40.807368 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-29 06:16:40.807374 | orchestrator | Monday 29 September 2025 06:09:25 +0000 (0:00:00.147) 0:03:16.677 ****** 2025-09-29 06:16:40.807380 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807386 | orchestrator | 2025-09-29 06:16:40.807392 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-29 06:16:40.807398 | orchestrator | Monday 29 September 2025 06:09:25 +0000 (0:00:00.242) 0:03:16.919 ****** 2025-09-29 06:16:40.807404 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807410 | orchestrator | 2025-09-29 06:16:40.807416 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-29 06:16:40.807422 | orchestrator | Monday 29 September 2025 06:09:25 +0000 (0:00:00.410) 0:03:17.330 ****** 2025-09-29 06:16:40.807428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-29 06:16:40.807434 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-09-29 06:16:40.807440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-29 06:16:40.807446 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807452 | orchestrator | 2025-09-29 06:16:40.807475 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-29 06:16:40.807482 | orchestrator | Monday 29 September 2025 06:09:26 +0000 (0:00:00.690) 0:03:18.021 ****** 2025-09-29 06:16:40.807488 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807494 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.807500 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.807506 | orchestrator | 2025-09-29 06:16:40.807515 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-29 06:16:40.807522 | orchestrator | Monday 29 September 2025 06:09:27 +0000 (0:00:00.909) 0:03:18.930 ****** 2025-09-29 06:16:40.807527 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807533 | orchestrator | 2025-09-29 06:16:40.807539 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-29 06:16:40.807545 | orchestrator | Monday 29 September 2025 06:09:27 +0000 (0:00:00.313) 0:03:19.244 ****** 2025-09-29 06:16:40.807551 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807557 | orchestrator | 2025-09-29 06:16:40.807563 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-29 06:16:40.807569 | orchestrator | Monday 29 September 2025 06:09:27 +0000 (0:00:00.159) 0:03:19.404 ****** 2025-09-29 06:16:40.807575 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.807581 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.807587 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.807593 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.807599 | orchestrator | 2025-09-29 06:16:40.807605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-29 06:16:40.807611 | orchestrator | Monday 29 September 2025 06:09:29 +0000 (0:00:01.347) 0:03:20.751 ****** 2025-09-29 06:16:40.807617 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.807623 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.807629 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.807635 | orchestrator | 2025-09-29 06:16:40.807641 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-29 06:16:40.807650 | orchestrator | Monday 29 September 2025 06:09:29 +0000 (0:00:00.321) 0:03:21.072 ****** 2025-09-29 06:16:40.807656 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.807662 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.807668 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.807674 | orchestrator | 2025-09-29 06:16:40.807680 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-29 06:16:40.807686 | orchestrator | Monday 29 September 2025 06:09:30 +0000 (0:00:01.314) 0:03:22.387 ****** 2025-09-29 06:16:40.807692 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-29 06:16:40.807703 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-29 06:16:40.807709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-29 06:16:40.807715 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807721 | orchestrator | 2025-09-29 06:16:40.807727 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-29 06:16:40.807733 | orchestrator | Monday 29 September 2025 06:09:31 +0000 (0:00:00.776) 
0:03:23.163 ****** 2025-09-29 06:16:40.807739 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.807745 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.807751 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.807756 | orchestrator | 2025-09-29 06:16:40.807763 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-29 06:16:40.807769 | orchestrator | Monday 29 September 2025 06:09:32 +0000 (0:00:00.436) 0:03:23.600 ****** 2025-09-29 06:16:40.807775 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.807781 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.807787 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.807793 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.807799 | orchestrator | 2025-09-29 06:16:40.807805 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-29 06:16:40.807811 | orchestrator | Monday 29 September 2025 06:09:33 +0000 (0:00:01.338) 0:03:24.939 ****** 2025-09-29 06:16:40.807817 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.807822 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.807828 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.807834 | orchestrator | 2025-09-29 06:16:40.807840 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-29 06:16:40.807846 | orchestrator | Monday 29 September 2025 06:09:33 +0000 (0:00:00.278) 0:03:25.218 ****** 2025-09-29 06:16:40.807852 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.807858 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.807864 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.807870 | orchestrator | 2025-09-29 06:16:40.807876 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2025-09-29 06:16:40.807882 | orchestrator | Monday 29 September 2025 06:09:35 +0000 (0:00:01.535) 0:03:26.754 ****** 2025-09-29 06:16:40.807888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-29 06:16:40.807894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-29 06:16:40.807900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-29 06:16:40.807906 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807912 | orchestrator | 2025-09-29 06:16:40.807918 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-29 06:16:40.807924 | orchestrator | Monday 29 September 2025 06:09:35 +0000 (0:00:00.637) 0:03:27.391 ****** 2025-09-29 06:16:40.807930 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.807936 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.807942 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.807948 | orchestrator | 2025-09-29 06:16:40.807954 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-09-29 06:16:40.807959 | orchestrator | Monday 29 September 2025 06:09:36 +0000 (0:00:00.296) 0:03:27.687 ****** 2025-09-29 06:16:40.807965 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.807971 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.807977 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.807983 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.807989 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.807995 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.808001 | orchestrator | 2025-09-29 06:16:40.808007 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-29 06:16:40.808013 | orchestrator | Monday 29 September 2025 06:09:37 +0000 (0:00:00.864) 0:03:28.552 ****** 2025-09-29 
06:16:40.808027 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.808033 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.808039 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.808045 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:16:40.808051 | orchestrator | 2025-09-29 06:16:40.808057 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-29 06:16:40.808063 | orchestrator | Monday 29 September 2025 06:09:38 +0000 (0:00:00.988) 0:03:29.540 ****** 2025-09-29 06:16:40.808069 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.808075 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.808081 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.808087 | orchestrator | 2025-09-29 06:16:40.808094 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-29 06:16:40.808100 | orchestrator | Monday 29 September 2025 06:09:38 +0000 (0:00:00.342) 0:03:29.882 ****** 2025-09-29 06:16:40.808106 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.808112 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.808118 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.808123 | orchestrator | 2025-09-29 06:16:40.808129 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-29 06:16:40.808135 | orchestrator | Monday 29 September 2025 06:09:39 +0000 (0:00:01.468) 0:03:31.351 ****** 2025-09-29 06:16:40.808141 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-29 06:16:40.808147 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-29 06:16:40.808153 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-29 06:16:40.808159 | orchestrator | skipping: [testbed-node-0] 2025-09-29 
06:16:40.808165 | orchestrator | 2025-09-29 06:16:40.808175 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-29 06:16:40.808181 | orchestrator | Monday 29 September 2025 06:09:40 +0000 (0:00:00.560) 0:03:31.911 ****** 2025-09-29 06:16:40.808187 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.808193 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.808199 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.808205 | orchestrator | 2025-09-29 06:16:40.808211 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-09-29 06:16:40.808217 | orchestrator | 2025-09-29 06:16:40.808223 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-29 06:16:40.808229 | orchestrator | Monday 29 September 2025 06:09:41 +0000 (0:00:00.865) 0:03:32.777 ****** 2025-09-29 06:16:40.808235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:16:40.808241 | orchestrator | 2025-09-29 06:16:40.808247 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-29 06:16:40.808253 | orchestrator | Monday 29 September 2025 06:09:42 +0000 (0:00:00.705) 0:03:33.483 ****** 2025-09-29 06:16:40.808259 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:16:40.808265 | orchestrator | 2025-09-29 06:16:40.808271 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-29 06:16:40.808277 | orchestrator | Monday 29 September 2025 06:09:42 +0000 (0:00:00.710) 0:03:34.193 ****** 2025-09-29 06:16:40.808283 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.808289 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.808295 | 
orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.808301 | orchestrator | 2025-09-29 06:16:40.808307 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-29 06:16:40.808313 | orchestrator | Monday 29 September 2025 06:09:43 +0000 (0:00:00.870) 0:03:35.064 ****** 2025-09-29 06:16:40.808319 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.808325 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.808336 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.808342 | orchestrator | 2025-09-29 06:16:40.808348 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-29 06:16:40.808354 | orchestrator | Monday 29 September 2025 06:09:44 +0000 (0:00:00.556) 0:03:35.620 ****** 2025-09-29 06:16:40.808360 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.808366 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.808372 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.808378 | orchestrator | 2025-09-29 06:16:40.808384 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-29 06:16:40.808390 | orchestrator | Monday 29 September 2025 06:09:44 +0000 (0:00:00.244) 0:03:35.864 ****** 2025-09-29 06:16:40.808396 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.808402 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.808408 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.808414 | orchestrator | 2025-09-29 06:16:40.808420 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-29 06:16:40.808426 | orchestrator | Monday 29 September 2025 06:09:44 +0000 (0:00:00.255) 0:03:36.120 ****** 2025-09-29 06:16:40.808432 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.808438 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.808444 | orchestrator | ok: 
[testbed-node-2] 2025-09-29 06:16:40.808450 | orchestrator | 2025-09-29 06:16:40.808501 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-29 06:16:40.808512 | orchestrator | Monday 29 September 2025 06:09:45 +0000 (0:00:00.675) 0:03:36.795 ****** 2025-09-29 06:16:40.808522 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.808532 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.808541 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.808552 | orchestrator | 2025-09-29 06:16:40.808563 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-29 06:16:40.808574 | orchestrator | Monday 29 September 2025 06:09:45 +0000 (0:00:00.433) 0:03:37.228 ****** 2025-09-29 06:16:40.808584 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.808594 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.808605 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.808611 | orchestrator | 2025-09-29 06:16:40.808617 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-29 06:16:40.808628 | orchestrator | Monday 29 September 2025 06:09:46 +0000 (0:00:00.300) 0:03:37.529 ****** 2025-09-29 06:16:40.808635 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.808641 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.808647 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.808653 | orchestrator | 2025-09-29 06:16:40.808659 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-29 06:16:40.808665 | orchestrator | Monday 29 September 2025 06:09:46 +0000 (0:00:00.824) 0:03:38.353 ****** 2025-09-29 06:16:40.808671 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.808677 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.808683 | orchestrator | ok: [testbed-node-2] 2025-09-29 
TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 29 September 2025 06:09:47 +0000 (0:00:00.673) 0:03:39.026 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 29 September 2025 06:09:47 +0000 (0:00:00.273) 0:03:39.300 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 29 September 2025 06:09:48 +0000 (0:00:00.455) 0:03:39.756 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 29 September 2025 06:09:48 +0000 (0:00:00.254) 0:03:40.010 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 29 September 2025 06:09:48 +0000 (0:00:00.266) 0:03:40.277 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 29 September 2025 06:09:49 +0000 (0:00:00.246) 0:03:40.524 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 29 September 2025 06:09:49 +0000 (0:00:00.398) 0:03:40.922 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 29 September 2025 06:09:49 +0000 (0:00:00.285) 0:03:41.207 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 29 September 2025 06:09:50 +0000 (0:00:00.264) 0:03:41.472 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 29 September 2025 06:09:50 +0000 (0:00:00.294) 0:03:41.766 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Monday 29 September 2025 06:09:50 +0000 (0:00:00.625) 0:03:42.392 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Monday 29 September 2025 06:09:51 +0000 (0:00:00.305) 0:03:42.697 ******
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Monday 29 September 2025 06:09:51 +0000 (0:00:00.599) 0:03:43.297 ******
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Monday 29 September 2025 06:09:52 +0000 (0:00:00.144) 0:03:43.441 ******
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Monday 29 September 2025 06:09:52 +0000 (0:00:00.907) 0:03:44.349 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Monday 29 September 2025 06:09:53 +0000 (0:00:00.340) 0:03:44.690 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Monday 29 September 2025 06:09:53 +0000 (0:00:00.332) 0:03:45.023 ******
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Monday 29 September 2025 06:09:55 +0000 (0:00:01.497) 0:03:46.520 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Monday 29 September 2025 06:09:55 +0000 (0:00:00.878) 0:03:47.398 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Monday 29 September 2025 06:09:56 +0000 (0:00:00.650) 0:03:48.049 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Monday 29 September 2025 06:09:57 +0000 (0:00:00.648) 0:03:48.697 ******
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Monday 29 September 2025 06:09:58 +0000 (0:00:01.397) 0:03:50.095 ******
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Monday 29 September 2025 06:09:59 +0000 (0:00:00.722) 0:03:50.817 ******
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Monday 29 September 2025 06:10:03 +0000 (0:00:03.712) 0:03:54.530 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Monday 29 September 2025 06:10:04 +0000 (0:00:01.162) 0:03:55.693 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Monday 29 September 2025 06:10:04 +0000 (0:00:00.334) 0:03:56.028 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Monday 29 September 2025 06:10:04 +0000 (0:00:00.301) 0:03:56.330 ******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Monday 29 September 2025 06:10:06 +0000 (0:00:02.037) 0:03:58.367 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Monday 29 September 2025 06:10:08 +0000 (0:00:01.494) 0:03:59.862 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Monday 29 September 2025 06:10:08 +0000 (0:00:00.308) 0:04:00.170 ******
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Monday 29 September 2025 06:10:09 +0000 (0:00:00.533) 0:04:00.704 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Monday 29 September 2025 06:10:09 +0000 (0:00:00.535) 0:04:01.240 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Monday 29 September 2025 06:10:10 +0000 (0:00:00.332) 0:04:01.573 ******
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Monday 29 September 2025 06:10:10 +0000 (0:00:00.507) 0:04:02.081 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Monday 29 September 2025 06:10:12 +0000 (0:00:02.088) 0:04:04.170 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Monday 29 September 2025 06:10:13 +0000 (0:00:01.139) 0:04:05.310 ******
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-0]

TASK [ceph-mon : Start the monitor service] ************************************
Monday 29 September 2025 06:10:15 +0000 (0:00:01.657) 0:04:06.967 ******
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Monday 29 September 2025 06:10:17 +0000 (0:00:02.266) 0:04:09.233 ******
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Monday 29 September 2025 06:10:18 +0000 (0:00:00.638) 0:04:09.872 ******
FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Monday 29 September 2025 06:10:40 +0000 (0:00:21.839) 0:04:31.711 ******
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-0]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Monday 29 September 2025 06:10:50 +0000 (0:00:09.810) 0:04:41.521 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Monday 29 September 2025 06:10:50 +0000 (0:00:00.383) 0:04:41.904 ******
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b2cc2a132406e0a690bbec382f8d7304702e0c80'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b2cc2a132406e0a690bbec382f8d7304702e0c80'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b2cc2a132406e0a690bbec382f8d7304702e0c80'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b2cc2a132406e0a690bbec382f8d7304702e0c80'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b2cc2a132406e0a690bbec382f8d7304702e0c80'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__b2cc2a132406e0a690bbec382f8d7304702e0c80'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__b2cc2a132406e0a690bbec382f8d7304702e0c80'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 29 September 2025 06:11:06 +0000 (0:00:15.759) 0:04:57.663 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Monday 29 September 2025 06:11:06 +0000 (0:00:00.314) 0:04:57.978 ******
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Monday 29 September 2025 06:11:07 +0000 (0:00:00.748) 0:04:58.726 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Monday 29 September 2025 06:11:07 +0000 (0:00:00.342) 0:04:59.068 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Monday 29 September 2025 06:11:07 +0000 (0:00:00.347) 0:04:59.416 ******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Monday 29 September 2025 06:11:08 +0000 (0:00:00.872) 0:05:00.288 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 29 September 2025 06:11:09 +0000 (0:00:00.720) 0:05:01.008 ******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-2, testbed-node-1

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 29 September 2025 06:11:10 +0000 (0:00:00.414) 0:05:01.422 ******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Monday 29 September 2025 06:11:10 +0000 (0:00:00.512) 0:05:01.934 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 29 September 2025 06:11:11 +0000 (0:00:00.683) 0:05:02.618 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 29 September 2025 06:11:11 +0000 (0:00:00.222) 0:05:02.840 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 29 September 2025 06:11:11 +0000 (0:00:00.281) 0:05:03.121 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 29 September 2025 06:11:11 +0000 (0:00:00.261) 0:05:03.383 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 29 September 2025 06:11:12 +0000 (0:00:00.795) 0:05:04.179 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 29 September 2025 06:11:13 +0000 (0:00:00.268) 0:05:04.447 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 29 September 2025 06:11:13 +0000 (0:00:00.256) 0:05:04.704 ******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 29 September 2025 06:11:13 +0000 (0:00:00.658) 0:05:05.362 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 29 September 2025 06:11:14 +0000 (0:00:00.857) 0:05:06.219 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 29 September 2025 06:11:15 +0000 (0:00:00.298) 0:05:06.518 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 29 September 2025 06:11:15 +0000 (0:00:00.350) 0:05:06.868 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 29 September 2025 06:11:15 +0000 (0:00:00.267) 0:05:07.136 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 29 September 2025 06:11:16 +0000 (0:00:00.422) 0:05:07.558 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 29 September 2025 06:11:16 +0000 (0:00:00.256) 0:05:07.814 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 29 September 2025 06:11:16 +0000 (0:00:00.248) 0:05:08.063 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 29 September 2025 06:11:16 +0000 (0:00:00.269) 0:05:08.332 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 29 September 2025 06:11:17 +0000 (0:00:00.416) 0:05:08.749 ******
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 29 September 2025 06:11:17 +0000 (0:00:00.326) 0:05:09.076 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
Monday 29 September 2025 06:11:18 +0000 (0:00:00.536) 0:05:09.612 ******
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : Include common.yml] *******************************************
Monday 29 September 2025 06:11:19 +0000 (0:00:00.927) 0:05:10.540 ******
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : Create mgr directory] *****************************************
Monday 29 September 2025 06:11:19 +0000 (0:00:00.720) 0:05:11.261 ******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
Monday 29 September 2025 06:11:20 +0000 (0:00:00.719) 0:05:11.980 ******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
Monday 29 September 2025 06:11:20 +0000 (0:00:00.261) 0:05:12.242 ******
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
Monday 29 September 2025 06:11:31 +0000 (0:00:10.585) 0:05:22.828 ******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : Get keys from monitors] ***************************************
Monday 29 September 2025 06:11:32 +0000 (0:00:00.598) 0:05:23.426 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
Monday 29 September 2025 06:11:34 +0000 (0:00:02.141) 0:05:25.568 ******
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
06:16:40.811759 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-29 06:16:40.811765 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-29 06:16:40.811771 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-29 06:16:40.811785 | orchestrator | 2025-09-29 06:16:40.811791 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-29 06:16:40.811797 | orchestrator | Monday 29 September 2025 06:11:35 +0000 (0:00:01.174) 0:05:26.742 ****** 2025-09-29 06:16:40.811803 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.811809 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.811815 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.811821 | orchestrator | 2025-09-29 06:16:40.811828 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-29 06:16:40.811834 | orchestrator | Monday 29 September 2025 06:11:35 +0000 (0:00:00.677) 0:05:27.419 ****** 2025-09-29 06:16:40.811840 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.811846 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.811852 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.811858 | orchestrator | 2025-09-29 06:16:40.811864 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-29 06:16:40.811871 | orchestrator | Monday 29 September 2025 06:11:36 +0000 (0:00:00.563) 0:05:27.982 ****** 2025-09-29 06:16:40.811877 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.811883 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.811889 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.811895 | orchestrator | 2025-09-29 06:16:40.811904 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-29 06:16:40.811910 | orchestrator | Monday 29 September 2025 06:11:36 +0000 (0:00:00.343) 
0:05:28.326 ****** 2025-09-29 06:16:40.811917 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:16:40.811923 | orchestrator | 2025-09-29 06:16:40.811929 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-29 06:16:40.811935 | orchestrator | Monday 29 September 2025 06:11:37 +0000 (0:00:00.589) 0:05:28.915 ****** 2025-09-29 06:16:40.811941 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.811947 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.811953 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.811959 | orchestrator | 2025-09-29 06:16:40.811965 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-29 06:16:40.811972 | orchestrator | Monday 29 September 2025 06:11:38 +0000 (0:00:00.601) 0:05:29.517 ****** 2025-09-29 06:16:40.811978 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.811984 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:16:40.811990 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:16:40.811996 | orchestrator | 2025-09-29 06:16:40.812002 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-09-29 06:16:40.812008 | orchestrator | Monday 29 September 2025 06:11:38 +0000 (0:00:00.318) 0:05:29.836 ****** 2025-09-29 06:16:40.812014 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:16:40.812021 | orchestrator | 2025-09-29 06:16:40.812027 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-29 06:16:40.812033 | orchestrator | Monday 29 September 2025 06:11:38 +0000 (0:00:00.505) 0:05:30.341 ****** 2025-09-29 06:16:40.812039 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.812045 | orchestrator | 
changed: [testbed-node-0] 2025-09-29 06:16:40.812051 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.812057 | orchestrator | 2025-09-29 06:16:40.812063 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-29 06:16:40.812069 | orchestrator | Monday 29 September 2025 06:11:40 +0000 (0:00:01.254) 0:05:31.596 ****** 2025-09-29 06:16:40.812075 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.812081 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.812088 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.812094 | orchestrator | 2025-09-29 06:16:40.812100 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-29 06:16:40.812106 | orchestrator | Monday 29 September 2025 06:11:41 +0000 (0:00:01.376) 0:05:32.973 ****** 2025-09-29 06:16:40.812116 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.812122 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.812129 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.812135 | orchestrator | 2025-09-29 06:16:40.812141 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-29 06:16:40.812147 | orchestrator | Monday 29 September 2025 06:11:43 +0000 (0:00:01.791) 0:05:34.764 ****** 2025-09-29 06:16:40.812153 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.812159 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.812165 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.812171 | orchestrator | 2025-09-29 06:16:40.812177 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-29 06:16:40.812183 | orchestrator | Monday 29 September 2025 06:11:45 +0000 (0:00:01.788) 0:05:36.553 ****** 2025-09-29 06:16:40.812189 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.812195 | orchestrator | skipping: 
[testbed-node-1] 2025-09-29 06:16:40.812202 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-29 06:16:40.812208 | orchestrator | 2025-09-29 06:16:40.812214 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-29 06:16:40.812220 | orchestrator | Monday 29 September 2025 06:11:45 +0000 (0:00:00.341) 0:05:36.895 ****** 2025-09-29 06:16:40.812226 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-29 06:16:40.812232 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-29 06:16:40.812256 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-29 06:16:40.812263 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-29 06:16:40.812269 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:16:40.812276 | orchestrator | 2025-09-29 06:16:40.812282 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-09-29 06:16:40.812288 | orchestrator | Monday 29 September 2025 06:12:10 +0000 (0:00:24.594) 0:06:01.489 ****** 2025-09-29 06:16:40.812294 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:16:40.812300 | orchestrator | 2025-09-29 06:16:40.812306 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-09-29 06:16:40.812312 | orchestrator | Monday 29 September 2025 06:12:11 +0000 (0:00:01.292) 0:06:02.782 ****** 2025-09-29 06:16:40.812318 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.812325 | orchestrator | 2025-09-29 06:16:40.812331 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] 
************************** 2025-09-29 06:16:40.812337 | orchestrator | Monday 29 September 2025 06:12:11 +0000 (0:00:00.252) 0:06:03.035 ****** 2025-09-29 06:16:40.812343 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.812349 | orchestrator | 2025-09-29 06:16:40.812355 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-09-29 06:16:40.812361 | orchestrator | Monday 29 September 2025 06:12:11 +0000 (0:00:00.119) 0:06:03.154 ****** 2025-09-29 06:16:40.812367 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-09-29 06:16:40.812377 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-09-29 06:16:40.812383 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-09-29 06:16:40.812389 | orchestrator | 2025-09-29 06:16:40.812395 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-09-29 06:16:40.812401 | orchestrator | Monday 29 September 2025 06:12:18 +0000 (0:00:06.485) 0:06:09.639 ****** 2025-09-29 06:16:40.812407 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-09-29 06:16:40.812413 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-09-29 06:16:40.812424 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-09-29 06:16:40.812430 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-09-29 06:16:40.812436 | orchestrator | 2025-09-29 06:16:40.812442 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-29 06:16:40.812448 | orchestrator | Monday 29 September 2025 06:12:22 +0000 (0:00:04.690) 0:06:14.330 ****** 2025-09-29 06:16:40.812454 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.812478 | orchestrator | changed: [testbed-node-1] 
2025-09-29 06:16:40.812484 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.812490 | orchestrator | 2025-09-29 06:16:40.812497 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-09-29 06:16:40.812503 | orchestrator | Monday 29 September 2025 06:12:23 +0000 (0:00:00.758) 0:06:15.089 ****** 2025-09-29 06:16:40.812509 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:16:40.812515 | orchestrator | 2025-09-29 06:16:40.812521 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-09-29 06:16:40.812527 | orchestrator | Monday 29 September 2025 06:12:24 +0000 (0:00:00.474) 0:06:15.563 ****** 2025-09-29 06:16:40.812533 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.812539 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.812545 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.812551 | orchestrator | 2025-09-29 06:16:40.812558 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-09-29 06:16:40.812564 | orchestrator | Monday 29 September 2025 06:12:24 +0000 (0:00:00.261) 0:06:15.825 ****** 2025-09-29 06:16:40.812570 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.812576 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.812582 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.812588 | orchestrator | 2025-09-29 06:16:40.812594 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-09-29 06:16:40.812600 | orchestrator | Monday 29 September 2025 06:12:25 +0000 (0:00:01.281) 0:06:17.106 ****** 2025-09-29 06:16:40.812606 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-29 06:16:40.812612 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-29 06:16:40.812618 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-29 06:16:40.812624 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:16:40.812631 | orchestrator | 2025-09-29 06:16:40.812637 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-09-29 06:16:40.812643 | orchestrator | Monday 29 September 2025 06:12:26 +0000 (0:00:00.536) 0:06:17.643 ****** 2025-09-29 06:16:40.812649 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.812655 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.812661 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.812667 | orchestrator | 2025-09-29 06:16:40.812673 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-09-29 06:16:40.812679 | orchestrator | 2025-09-29 06:16:40.812686 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-29 06:16:40.812692 | orchestrator | Monday 29 September 2025 06:12:26 +0000 (0:00:00.475) 0:06:18.118 ****** 2025-09-29 06:16:40.812698 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.812704 | orchestrator | 2025-09-29 06:16:40.812710 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-29 06:16:40.812716 | orchestrator | Monday 29 September 2025 06:12:27 +0000 (0:00:00.628) 0:06:18.746 ****** 2025-09-29 06:16:40.812743 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.812751 | orchestrator | 2025-09-29 06:16:40.812758 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-29 06:16:40.812769 | orchestrator | Monday 29 September 2025 06:12:27 +0000 (0:00:00.444) 0:06:19.191 ****** 2025-09-29 
06:16:40.812775 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.812781 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.812787 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.812793 | orchestrator | 2025-09-29 06:16:40.812799 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-29 06:16:40.812805 | orchestrator | Monday 29 September 2025 06:12:28 +0000 (0:00:00.247) 0:06:19.439 ****** 2025-09-29 06:16:40.812811 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.812817 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.812823 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.812829 | orchestrator | 2025-09-29 06:16:40.812836 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-29 06:16:40.812842 | orchestrator | Monday 29 September 2025 06:12:28 +0000 (0:00:00.803) 0:06:20.242 ****** 2025-09-29 06:16:40.812848 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.812854 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.812860 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.812866 | orchestrator | 2025-09-29 06:16:40.812873 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-29 06:16:40.812879 | orchestrator | Monday 29 September 2025 06:12:29 +0000 (0:00:00.635) 0:06:20.878 ****** 2025-09-29 06:16:40.812885 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.812891 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.812897 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.812903 | orchestrator | 2025-09-29 06:16:40.812910 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-29 06:16:40.812916 | orchestrator | Monday 29 September 2025 06:12:30 +0000 (0:00:00.638) 0:06:21.516 ****** 2025-09-29 06:16:40.812922 | orchestrator | skipping: 
[testbed-node-3] 2025-09-29 06:16:40.812929 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.812935 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.812941 | orchestrator | 2025-09-29 06:16:40.812947 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-29 06:16:40.812953 | orchestrator | Monday 29 September 2025 06:12:30 +0000 (0:00:00.268) 0:06:21.785 ****** 2025-09-29 06:16:40.812959 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.812966 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.812972 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.812978 | orchestrator | 2025-09-29 06:16:40.812984 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-29 06:16:40.812990 | orchestrator | Monday 29 September 2025 06:12:30 +0000 (0:00:00.403) 0:06:22.189 ****** 2025-09-29 06:16:40.812996 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813002 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.813008 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813014 | orchestrator | 2025-09-29 06:16:40.813020 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-29 06:16:40.813026 | orchestrator | Monday 29 September 2025 06:12:31 +0000 (0:00:00.252) 0:06:22.441 ****** 2025-09-29 06:16:40.813032 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813039 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.813045 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813051 | orchestrator | 2025-09-29 06:16:40.813057 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-29 06:16:40.813063 | orchestrator | Monday 29 September 2025 06:12:31 +0000 (0:00:00.612) 0:06:23.054 ****** 2025-09-29 06:16:40.813069 | orchestrator | ok: [testbed-node-3] 2025-09-29 
06:16:40.813075 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.813081 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813087 | orchestrator | 2025-09-29 06:16:40.813093 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-29 06:16:40.813100 | orchestrator | Monday 29 September 2025 06:12:32 +0000 (0:00:00.680) 0:06:23.734 ****** 2025-09-29 06:16:40.813110 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813116 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.813122 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813128 | orchestrator | 2025-09-29 06:16:40.813134 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-29 06:16:40.813140 | orchestrator | Monday 29 September 2025 06:12:32 +0000 (0:00:00.434) 0:06:24.168 ****** 2025-09-29 06:16:40.813147 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813153 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.813159 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813165 | orchestrator | 2025-09-29 06:16:40.813171 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-29 06:16:40.813177 | orchestrator | Monday 29 September 2025 06:12:33 +0000 (0:00:00.267) 0:06:24.436 ****** 2025-09-29 06:16:40.813183 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813189 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.813195 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813201 | orchestrator | 2025-09-29 06:16:40.813207 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-29 06:16:40.813239 | orchestrator | Monday 29 September 2025 06:12:33 +0000 (0:00:00.275) 0:06:24.711 ****** 2025-09-29 06:16:40.813246 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813252 | orchestrator | ok: 
[testbed-node-4] 2025-09-29 06:16:40.813258 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813264 | orchestrator | 2025-09-29 06:16:40.813270 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-29 06:16:40.813277 | orchestrator | Monday 29 September 2025 06:12:33 +0000 (0:00:00.275) 0:06:24.987 ****** 2025-09-29 06:16:40.813283 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813289 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.813295 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813301 | orchestrator | 2025-09-29 06:16:40.813307 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-29 06:16:40.813313 | orchestrator | Monday 29 September 2025 06:12:34 +0000 (0:00:00.457) 0:06:25.444 ****** 2025-09-29 06:16:40.813319 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813325 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.813331 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813337 | orchestrator | 2025-09-29 06:16:40.813347 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-29 06:16:40.813353 | orchestrator | Monday 29 September 2025 06:12:34 +0000 (0:00:00.246) 0:06:25.691 ****** 2025-09-29 06:16:40.813359 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813365 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.813372 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813378 | orchestrator | 2025-09-29 06:16:40.813384 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-29 06:16:40.813390 | orchestrator | Monday 29 September 2025 06:12:34 +0000 (0:00:00.260) 0:06:25.951 ****** 2025-09-29 06:16:40.813396 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813402 | orchestrator | skipping: [testbed-node-4] 2025-09-29 
06:16:40.813409 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813415 | orchestrator | 2025-09-29 06:16:40.813421 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-29 06:16:40.813427 | orchestrator | Monday 29 September 2025 06:12:34 +0000 (0:00:00.272) 0:06:26.224 ****** 2025-09-29 06:16:40.813433 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813439 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.813445 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813451 | orchestrator | 2025-09-29 06:16:40.813469 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-29 06:16:40.813476 | orchestrator | Monday 29 September 2025 06:12:35 +0000 (0:00:00.432) 0:06:26.656 ****** 2025-09-29 06:16:40.813482 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813488 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.813499 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813505 | orchestrator | 2025-09-29 06:16:40.813511 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-09-29 06:16:40.813520 | orchestrator | Monday 29 September 2025 06:12:35 +0000 (0:00:00.442) 0:06:27.099 ****** 2025-09-29 06:16:40.813526 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813532 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.813538 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.813544 | orchestrator | 2025-09-29 06:16:40.813551 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-09-29 06:16:40.813557 | orchestrator | Monday 29 September 2025 06:12:35 +0000 (0:00:00.275) 0:06:27.374 ****** 2025-09-29 06:16:40.813563 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-29 06:16:40.813569 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-29 06:16:40.813575 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-29 06:16:40.813581 | orchestrator | 2025-09-29 06:16:40.813587 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-09-29 06:16:40.813593 | orchestrator | Monday 29 September 2025 06:12:36 +0000 (0:00:00.921) 0:06:28.295 ****** 2025-09-29 06:16:40.813599 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.813605 | orchestrator | 2025-09-29 06:16:40.813612 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-09-29 06:16:40.813618 | orchestrator | Monday 29 September 2025 06:12:37 +0000 (0:00:00.447) 0:06:28.742 ****** 2025-09-29 06:16:40.813624 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813630 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.813636 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813642 | orchestrator | 2025-09-29 06:16:40.813648 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-09-29 06:16:40.813654 | orchestrator | Monday 29 September 2025 06:12:37 +0000 (0:00:00.309) 0:06:29.052 ****** 2025-09-29 06:16:40.813660 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.813666 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.813673 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.813679 | orchestrator | 2025-09-29 06:16:40.813685 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-09-29 06:16:40.813691 | orchestrator | Monday 29 September 2025 06:12:38 +0000 (0:00:00.410) 0:06:29.462 ****** 2025-09-29 06:16:40.813697 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.813703 | 
orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.813709 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.813715 | orchestrator |
2025-09-29 06:16:40.813721 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-29 06:16:40.813727 | orchestrator | Monday 29 September 2025 06:12:38 +0000 (0:00:00.582) 0:06:30.044 ******
2025-09-29 06:16:40.813733 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.813739 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.813745 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.813752 | orchestrator |
2025-09-29 06:16:40.813758 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-29 06:16:40.813764 | orchestrator | Monday 29 September 2025 06:12:38 +0000 (0:00:00.295) 0:06:30.340 ******
2025-09-29 06:16:40.813770 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-29 06:16:40.813776 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-29 06:16:40.813782 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-29 06:16:40.813788 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-29 06:16:40.813794 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-29 06:16:40.813804 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-29 06:16:40.813811 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-29 06:16:40.813817 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-29 06:16:40.813827 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-29 06:16:40.813834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-29 06:16:40.813840 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-29 06:16:40.813846 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-29 06:16:40.813852 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-29 06:16:40.813858 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-29 06:16:40.813864 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-29 06:16:40.813870 | orchestrator |
2025-09-29 06:16:40.813876 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-29 06:16:40.813883 | orchestrator | Monday 29 September 2025 06:12:41 +0000 (0:00:02.695) 0:06:33.035 ******
2025-09-29 06:16:40.813889 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.813895 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.813901 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.813907 | orchestrator |
2025-09-29 06:16:40.813913 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-29 06:16:40.813920 | orchestrator | Monday 29 September 2025 06:12:42 +0000 (0:00:00.484) 0:06:33.520 ******
2025-09-29 06:16:40.813926 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.813932 | orchestrator |
2025-09-29 06:16:40.813941 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-29 06:16:40.813948 | orchestrator | Monday 29 September 2025 06:12:42 +0000 (0:00:00.514) 0:06:34.034 ******
2025-09-29 06:16:40.813954 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-29 06:16:40.813960 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-29 06:16:40.813966 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-29 06:16:40.813972 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-29 06:16:40.813978 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-29 06:16:40.813984 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-29 06:16:40.813991 | orchestrator |
2025-09-29 06:16:40.813997 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-29 06:16:40.814003 | orchestrator | Monday 29 September 2025 06:12:43 +0000 (0:00:00.973) 0:06:35.008 ******
2025-09-29 06:16:40.814009 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.814050 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-29 06:16:40.814058 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-29 06:16:40.814064 | orchestrator |
2025-09-29 06:16:40.814070 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-29 06:16:40.814076 | orchestrator | Monday 29 September 2025 06:12:45 +0000 (0:00:02.163) 0:06:37.171 ******
2025-09-29 06:16:40.814082 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-29 06:16:40.814088 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-29 06:16:40.814094 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.814100 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-29 06:16:40.814106 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-29 06:16:40.814118 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.814124 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-29 06:16:40.814130 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-29 06:16:40.814136 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.814142 | orchestrator |
2025-09-29 06:16:40.814148 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-29 06:16:40.814154 | orchestrator | Monday 29 September 2025 06:12:47 +0000 (0:00:01.640) 0:06:38.812 ******
2025-09-29 06:16:40.814160 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-29 06:16:40.814166 | orchestrator |
2025-09-29 06:16:40.814172 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-29 06:16:40.814178 | orchestrator | Monday 29 September 2025 06:12:49 +0000 (0:00:02.170) 0:06:40.983 ******
2025-09-29 06:16:40.814184 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.814190 | orchestrator |
2025-09-29 06:16:40.814197 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-29 06:16:40.814203 | orchestrator | Monday 29 September 2025 06:12:50 +0000 (0:00:00.456) 0:06:41.440 ******
2025-09-29 06:16:40.814209 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-da34c784-00a3-5dad-8c50-6eedba006e78', 'data_vg': 'ceph-da34c784-00a3-5dad-8c50-6eedba006e78'})
2025-09-29 06:16:40.814216 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-6be24fb8-e256-5721-a6a2-6a7f57bf9910', 'data_vg': 'ceph-6be24fb8-e256-5721-a6a2-6a7f57bf9910'})
2025-09-29 06:16:40.814222 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-34f4ec66-7b15-5133-bf2a-17bf3a27b54a', 'data_vg': 'ceph-34f4ec66-7b15-5133-bf2a-17bf3a27b54a'})
2025-09-29 06:16:40.814229 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5b44ac90-f026-5081-896e-3232400f6176', 'data_vg': 'ceph-5b44ac90-f026-5081-896e-3232400f6176'})
2025-09-29 06:16:40.814239 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ed2553fc-8d98-5289-a275-720d5101f8b0', 'data_vg': 'ceph-ed2553fc-8d98-5289-a275-720d5101f8b0'})
2025-09-29 06:16:40.814246 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-46f249ea-6148-566c-bc01-762c6d5847ca', 'data_vg': 'ceph-46f249ea-6148-566c-bc01-762c6d5847ca'})
2025-09-29 06:16:40.814252 | orchestrator |
2025-09-29 06:16:40.814258 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-29 06:16:40.814264 | orchestrator | Monday 29 September 2025 06:13:27 +0000 (0:00:37.442) 0:07:18.883 ******
2025-09-29 06:16:40.814270 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.814276 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.814283 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.814289 | orchestrator |
2025-09-29 06:16:40.814295 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-29 06:16:40.814301 | orchestrator | Monday 29 September 2025 06:13:28 +0000 (0:00:00.605) 0:07:19.488 ******
2025-09-29 06:16:40.814307 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.814313 | orchestrator |
2025-09-29 06:16:40.814319 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-29 06:16:40.814325 | orchestrator | Monday 29 September 2025 06:13:28 +0000 (0:00:00.644) 0:07:20.105 ******
2025-09-29 06:16:40.814331 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.814337 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.814344 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.814350 | orchestrator |
2025-09-29 06:16:40.814356 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-29 06:16:40.814365 | orchestrator | Monday 29 September 2025 06:13:29 +0000 (0:00:00.644) 0:07:20.749 ******
2025-09-29 06:16:40.814372 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.814382 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.814388 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.814394 | orchestrator |
2025-09-29 06:16:40.814400 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-29 06:16:40.814406 | orchestrator | Monday 29 September 2025 06:13:32 +0000 (0:00:02.795) 0:07:23.545 ******
2025-09-29 06:16:40.814412 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.814418 | orchestrator |
2025-09-29 06:16:40.814424 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-29 06:16:40.814430 | orchestrator | Monday 29 September 2025 06:13:32 +0000 (0:00:00.534) 0:07:24.080 ******
2025-09-29 06:16:40.814437 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.814443 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.814449 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.814455 | orchestrator |
2025-09-29 06:16:40.814476 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-29 06:16:40.814482 | orchestrator | Monday 29 September 2025 06:13:33 +0000 (0:00:01.160) 0:07:25.240 ******
2025-09-29 06:16:40.814488 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.814494 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.814500 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.814506 | orchestrator |
2025-09-29 06:16:40.814512 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-29 06:16:40.814518 | orchestrator | Monday 29 September 2025 06:13:35 +0000 (0:00:01.406) 0:07:26.646 ******
2025-09-29 06:16:40.814524 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.814530 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.814536 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.814542 | orchestrator |
2025-09-29 06:16:40.814548 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-29 06:16:40.814554 | orchestrator | Monday 29 September 2025 06:13:36 +0000 (0:00:01.732) 0:07:28.378 ******
2025-09-29 06:16:40.814560 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.814566 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.814572 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.814578 | orchestrator |
2025-09-29 06:16:40.814584 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-29 06:16:40.814591 | orchestrator | Monday 29 September 2025 06:13:37 +0000 (0:00:00.311) 0:07:28.690 ******
2025-09-29 06:16:40.814596 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.814602 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.814609 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.814615 | orchestrator |
2025-09-29 06:16:40.814621 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-29 06:16:40.814627 | orchestrator | Monday 29 September 2025 06:13:37 +0000 (0:00:00.309) 0:07:29.000 ******
2025-09-29 06:16:40.814633 | orchestrator | ok: [testbed-node-3] => (item=2)
2025-09-29 06:16:40.814639 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-29 06:16:40.814645 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-09-29 06:16:40.814651 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-09-29 06:16:40.814657 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-09-29 06:16:40.814663 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-09-29 06:16:40.814669 | orchestrator |
2025-09-29 06:16:40.814675 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-29 06:16:40.814681 | orchestrator | Monday 29 September 2025 06:13:38 +0000 (0:00:01.396) 0:07:30.397 ******
2025-09-29 06:16:40.814687 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-09-29 06:16:40.814693 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-09-29 06:16:40.814699 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-29 06:16:40.814705 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-29 06:16:40.814712 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-09-29 06:16:40.814724 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-09-29 06:16:40.814730 | orchestrator |
2025-09-29 06:16:40.814736 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-29 06:16:40.814743 | orchestrator | Monday 29 September 2025 06:13:41 +0000 (0:00:02.174) 0:07:32.571 ******
2025-09-29 06:16:40.814749 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-09-29 06:16:40.814755 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-09-29 06:16:40.814764 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-29 06:16:40.814771 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-09-29 06:16:40.814777 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-29 06:16:40.814783 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-09-29 06:16:40.814789 | orchestrator |
2025-09-29 06:16:40.814795 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-29 06:16:40.814801 | orchestrator | Monday 29 September 2025 06:13:44 +0000 (0:00:03.601) 0:07:36.173 ******
2025-09-29 06:16:40.814807 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.814813 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.814819 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-29 06:16:40.814825 | orchestrator |
2025-09-29 06:16:40.814831 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-29 06:16:40.814837 | orchestrator | Monday 29 September 2025 06:13:47 +0000 (0:00:02.750) 0:07:38.923 ******
2025-09-29 06:16:40.814843 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.814850 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.814856 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-09-29 06:16:40.814862 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-29 06:16:40.814868 | orchestrator |
2025-09-29 06:16:40.814874 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-09-29 06:16:40.814880 | orchestrator | Monday 29 September 2025 06:14:00 +0000 (0:00:12.696) 0:07:51.620 ******
2025-09-29 06:16:40.814886 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.814892 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.814902 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.814908 | orchestrator |
2025-09-29 06:16:40.814914 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-29 06:16:40.814920 | orchestrator | Monday 29 September 2025 06:14:01 +0000 (0:00:00.888) 0:07:52.508 ******
2025-09-29 06:16:40.814926 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.814932 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.814938 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.814944 | orchestrator |
2025-09-29 06:16:40.814950 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-29 06:16:40.814957 | orchestrator | Monday 29 September 2025 06:14:01 +0000 (0:00:00.585) 0:07:53.094 ******
2025-09-29 06:16:40.814963 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.814969 | orchestrator |
2025-09-29 06:16:40.814975 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-29 06:16:40.814981 | orchestrator | Monday 29 September 2025 06:14:02 +0000 (0:00:00.547) 0:07:53.642 ******
2025-09-29 06:16:40.814987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:16:40.814993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:16:40.814999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:16:40.815005 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815011 | orchestrator |
2025-09-29 06:16:40.815017 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-29 06:16:40.815023 | orchestrator | Monday 29 September 2025 06:14:02 +0000 (0:00:00.405) 0:07:54.048 ******
2025-09-29 06:16:40.815029 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815039 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815045 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.815051 | orchestrator |
2025-09-29 06:16:40.815058 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-29 06:16:40.815064 | orchestrator | Monday 29 September 2025 06:14:02 +0000 (0:00:00.324) 0:07:54.372 ******
2025-09-29 06:16:40.815070 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815076 | orchestrator |
2025-09-29 06:16:40.815082 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-29 06:16:40.815088 | orchestrator | Monday 29 September 2025 06:14:03 +0000 (0:00:00.812) 0:07:55.184 ******
2025-09-29 06:16:40.815094 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815100 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815106 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.815112 | orchestrator |
2025-09-29 06:16:40.815118 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-29 06:16:40.815124 | orchestrator | Monday 29 September 2025 06:14:04 +0000 (0:00:00.301) 0:07:55.486 ******
2025-09-29 06:16:40.815130 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815136 | orchestrator |
2025-09-29 06:16:40.815142 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-29 06:16:40.815148 | orchestrator | Monday 29 September 2025 06:14:04 +0000 (0:00:00.225) 0:07:55.711 ******
2025-09-29 06:16:40.815154 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815160 | orchestrator |
2025-09-29 06:16:40.815166 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-29 06:16:40.815172 | orchestrator | Monday 29 September 2025 06:14:04 +0000 (0:00:00.217) 0:07:55.929 ******
2025-09-29 06:16:40.815178 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815184 | orchestrator |
2025-09-29 06:16:40.815190 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-29 06:16:40.815196 | orchestrator | Monday 29 September 2025 06:14:04 +0000 (0:00:00.120) 0:07:56.049 ******
2025-09-29 06:16:40.815203 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815209 | orchestrator |
2025-09-29 06:16:40.815215 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-29 06:16:40.815221 | orchestrator | Monday 29 September 2025 06:14:04 +0000 (0:00:00.186) 0:07:56.236 ******
2025-09-29 06:16:40.815227 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815233 | orchestrator |
2025-09-29 06:16:40.815239 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-29 06:16:40.815245 | orchestrator | Monday 29 September 2025 06:14:05 +0000 (0:00:00.196) 0:07:56.433 ******
2025-09-29 06:16:40.815255 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:16:40.815261 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:16:40.815267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:16:40.815273 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815279 | orchestrator |
2025-09-29 06:16:40.815286 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-29 06:16:40.815292 | orchestrator | Monday 29 September 2025 06:14:05 +0000 (0:00:00.422) 0:07:56.855 ******
2025-09-29 06:16:40.815298 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815304 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815310 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.815316 | orchestrator |
2025-09-29 06:16:40.815322 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-29 06:16:40.815328 | orchestrator | Monday 29 September 2025 06:14:06 +0000 (0:00:00.579) 0:07:57.434 ******
2025-09-29 06:16:40.815334 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815340 | orchestrator |
2025-09-29 06:16:40.815346 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-29 06:16:40.815352 | orchestrator | Monday 29 September 2025 06:14:06 +0000 (0:00:00.247) 0:07:57.682 ******
2025-09-29 06:16:40.815362 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815368 | orchestrator |
2025-09-29 06:16:40.815374 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-09-29 06:16:40.815380 | orchestrator |
2025-09-29 06:16:40.815386 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-29 06:16:40.815393 | orchestrator | Monday 29 September 2025 06:14:06 +0000 (0:00:00.657) 0:07:58.339 ******
2025-09-29 06:16:40.815402 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.815409 | orchestrator |
2025-09-29 06:16:40.815415 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-29 06:16:40.815421 | orchestrator | Monday 29 September 2025 06:14:08 +0000 (0:00:01.224) 0:07:59.564 ******
2025-09-29 06:16:40.815427 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.815433 | orchestrator |
2025-09-29 06:16:40.815439 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-29 06:16:40.815445 | orchestrator | Monday 29 September 2025 06:14:09 +0000 (0:00:01.252) 0:08:00.817 ******
2025-09-29 06:16:40.815451 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815470 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815477 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.815483 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.815489 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.815495 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.815501 | orchestrator |
2025-09-29 06:16:40.815507 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-29 06:16:40.815513 | orchestrator | Monday 29 September 2025 06:14:10 +0000 (0:00:00.816) 0:08:01.634 ******
2025-09-29 06:16:40.815519 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.815525 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.815531 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.815537 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.815543 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.815549 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.815555 | orchestrator |
2025-09-29 06:16:40.815561 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-29 06:16:40.815568 | orchestrator | Monday 29 September 2025 06:14:11 +0000 (0:00:01.040) 0:08:02.674 ******
2025-09-29 06:16:40.815574 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.815580 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.815586 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.815592 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.815598 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.815604 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.815610 | orchestrator |
2025-09-29 06:16:40.815616 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-29 06:16:40.815622 | orchestrator | Monday 29 September 2025 06:14:12 +0000 (0:00:01.328) 0:08:04.003 ******
2025-09-29 06:16:40.815628 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.815634 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.815640 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.815646 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.815652 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.815658 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.815664 | orchestrator |
2025-09-29 06:16:40.815670 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-29 06:16:40.815676 | orchestrator | Monday 29 September 2025 06:14:13 +0000 (0:00:00.968) 0:08:04.971 ******
2025-09-29 06:16:40.815682 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815688 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815694 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.815705 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.815711 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.815717 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.815723 | orchestrator |
2025-09-29 06:16:40.815729 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-29 06:16:40.815735 | orchestrator | Monday 29 September 2025 06:14:14 +0000 (0:00:00.787) 0:08:05.758 ******
2025-09-29 06:16:40.815741 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.815747 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.815753 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.815759 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815765 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815771 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.815777 | orchestrator |
2025-09-29 06:16:40.815783 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-29 06:16:40.815789 | orchestrator | Monday 29 September 2025 06:14:14 +0000 (0:00:00.501) 0:08:06.260 ******
2025-09-29 06:16:40.815798 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.815804 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.815810 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.815816 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815822 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815828 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.815834 | orchestrator |
2025-09-29 06:16:40.815840 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-29 06:16:40.815846 | orchestrator | Monday 29 September 2025 06:14:15 +0000 (0:00:00.643) 0:08:06.903 ******
2025-09-29 06:16:40.815852 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.815858 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.815865 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.815871 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.815877 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.815883 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.815889 | orchestrator |
2025-09-29 06:16:40.815895 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-29 06:16:40.815901 | orchestrator | Monday 29 September 2025 06:14:16 +0000 (0:00:01.141) 0:08:08.045 ******
2025-09-29 06:16:40.815907 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.815913 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.815919 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.815925 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.815931 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.815937 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.815943 | orchestrator |
2025-09-29 06:16:40.815949 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-29 06:16:40.815955 | orchestrator | Monday 29 September 2025 06:14:17 +0000 (0:00:00.945) 0:08:08.990 ******
2025-09-29 06:16:40.815961 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.815967 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.815977 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.815983 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.815989 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.815995 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.816001 | orchestrator |
2025-09-29 06:16:40.816007 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-29 06:16:40.816013 | orchestrator | Monday 29 September 2025 06:14:18 +0000 (0:00:00.734) 0:08:09.724 ******
2025-09-29 06:16:40.816019 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.816025 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.816031 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.816037 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.816043 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.816049 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.816055 | orchestrator |
2025-09-29 06:16:40.816061 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-29 06:16:40.816072 | orchestrator | Monday 29 September 2025 06:14:18 +0000 (0:00:00.480) 0:08:10.205 ******
2025-09-29 06:16:40.816078 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.816084 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.816090 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.816096 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.816102 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.816108 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.816114 | orchestrator |
2025-09-29 06:16:40.816121 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-29 06:16:40.816127 | orchestrator | Monday 29 September 2025 06:14:19 +0000 (0:00:00.661) 0:08:10.867 ******
2025-09-29 06:16:40.816133 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.816139 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.816145 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.816151 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.816157 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.816163 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.816169 | orchestrator |
2025-09-29 06:16:40.816175 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-29 06:16:40.816181 | orchestrator | Monday 29 September 2025 06:14:19 +0000 (0:00:00.518) 0:08:11.385 ******
2025-09-29 06:16:40.816187 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.816193 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.816199 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.816205 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.816211 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.816217 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.816223 | orchestrator |
2025-09-29 06:16:40.816229 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-29 06:16:40.816235 | orchestrator | Monday 29 September 2025 06:14:20 +0000 (0:00:00.666) 0:08:12.052 ******
2025-09-29 06:16:40.816242 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.816248 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.816254 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.816260 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.816266 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.816272 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.816278 | orchestrator |
2025-09-29 06:16:40.816284 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-29 06:16:40.816290 | orchestrator | Monday 29 September 2025 06:14:21 +0000 (0:00:00.512) 0:08:12.564 ******
2025-09-29 06:16:40.816296 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:16:40.816302 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:16:40.816308 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:16:40.816314 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.816320 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.816326 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.816332 | orchestrator |
2025-09-29 06:16:40.816339 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-29 06:16:40.816345 | orchestrator | Monday 29 September 2025 06:14:21 +0000 (0:00:00.635) 0:08:13.200 ******
2025-09-29 06:16:40.816351 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.816357 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.816363 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.816369 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.816375 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.816381 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.816387 | orchestrator |
2025-09-29 06:16:40.816393 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-29 06:16:40.816402 | orchestrator | Monday 29 September 2025 06:14:22 +0000 (0:00:00.483) 0:08:13.683 ******
2025-09-29 06:16:40.816408 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.816419 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.816425 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.816431 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.816437 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.816443 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.816449 | orchestrator |
2025-09-29 06:16:40.816455 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-29 06:16:40.816474 | orchestrator | Monday 29 September 2025 06:14:22 +0000 (0:00:00.677) 0:08:14.361 ******
2025-09-29 06:16:40.816480 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.816486 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:16:40.816492 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:16:40.816498 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.816504 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.816510 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.816516 | orchestrator |
2025-09-29 06:16:40.816523 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-29 06:16:40.816529 | orchestrator | Monday 29 September 2025 06:14:23 +0000 (0:00:00.997) 0:08:15.358 ******
2025-09-29 06:16:40.816535 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:16:40.816541 | orchestrator |
2025-09-29 06:16:40.816547 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-09-29 06:16:40.816553 | orchestrator | Monday 29 September 2025 06:14:27 +0000 (0:00:03.756) 0:08:19.115 ******
2025-09-29 06:16:40.816559 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.816565 | orchestrator |
2025-09-29 06:16:40.816571 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-09-29 06:16:40.816577 | orchestrator | Monday 29 September 2025 06:14:29 +0000 (0:00:01.839) 0:08:20.954 ******
2025-09-29 06:16:40.816583 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:16:40.816592 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:16:40.816599 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:16:40.816605 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.816611 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.816617 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.816623 | orchestrator |
2025-09-29 06:16:40.816629 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-09-29 06:16:40.816635 | orchestrator | Monday 29 September 2025 06:14:30 +0000 (0:00:01.321) 0:08:22.276 ******
2025-09-29 06:16:40.816641 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:16:40.816647 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:16:40.816653 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:16:40.816659 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.816665 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.816671 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.816677 | orchestrator |
2025-09-29 06:16:40.816683 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-09-29 06:16:40.816689 | orchestrator | Monday 29 September 2025 06:14:32 +0000 (0:00:01.209) 0:08:23.485 ******
2025-09-29 06:16:40.816695 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.816702 | orchestrator |
2025-09-29 06:16:40.816708 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-29 06:16:40.816714 | orchestrator | Monday 29 September 2025 06:14:33 +0000 (0:00:01.199) 0:08:24.685 ******
2025-09-29 06:16:40.816720 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:16:40.816726 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:16:40.816732 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:16:40.816738 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.816744 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.816750 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.816756 | orchestrator |
2025-09-29 06:16:40.816762 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-29 06:16:40.816768 | orchestrator | Monday 29 September 2025 06:14:34 +0000 (0:00:01.467) 0:08:26.152 ******
2025-09-29 06:16:40.816779 | orchestrator
| changed: [testbed-node-0] 2025-09-29 06:16:40.816785 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.816791 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.816797 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.816803 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.816809 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.816815 | orchestrator | 2025-09-29 06:16:40.816821 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-29 06:16:40.816827 | orchestrator | Monday 29 September 2025 06:14:38 +0000 (0:00:03.761) 0:08:29.914 ****** 2025-09-29 06:16:40.816833 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.816839 | orchestrator | 2025-09-29 06:16:40.816846 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-29 06:16:40.816852 | orchestrator | Monday 29 September 2025 06:14:39 +0000 (0:00:01.062) 0:08:30.976 ****** 2025-09-29 06:16:40.816858 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.816864 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.816870 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.816876 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.816882 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.816888 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.816893 | orchestrator | 2025-09-29 06:16:40.816900 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-29 06:16:40.816906 | orchestrator | Monday 29 September 2025 06:14:40 +0000 (0:00:00.541) 0:08:31.518 ****** 2025-09-29 06:16:40.816912 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:16:40.816918 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:16:40.816924 | 
orchestrator | changed: [testbed-node-2] 2025-09-29 06:16:40.816930 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.816936 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.816942 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.816948 | orchestrator | 2025-09-29 06:16:40.816954 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-29 06:16:40.816960 | orchestrator | Monday 29 September 2025 06:14:42 +0000 (0:00:02.217) 0:08:33.735 ****** 2025-09-29 06:16:40.816969 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:16:40.816976 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:16:40.816982 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:16:40.816988 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.816994 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817000 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817006 | orchestrator | 2025-09-29 06:16:40.817012 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-29 06:16:40.817018 | orchestrator | 2025-09-29 06:16:40.817024 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-29 06:16:40.817030 | orchestrator | Monday 29 September 2025 06:14:43 +0000 (0:00:01.096) 0:08:34.832 ****** 2025-09-29 06:16:40.817037 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.817043 | orchestrator | 2025-09-29 06:16:40.817049 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-29 06:16:40.817055 | orchestrator | Monday 29 September 2025 06:14:43 +0000 (0:00:00.429) 0:08:35.261 ****** 2025-09-29 06:16:40.817061 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-29 06:16:40.817067 | orchestrator | 2025-09-29 06:16:40.817073 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-29 06:16:40.817079 | orchestrator | Monday 29 September 2025 06:14:44 +0000 (0:00:00.583) 0:08:35.845 ****** 2025-09-29 06:16:40.817085 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817095 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817102 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817107 | orchestrator | 2025-09-29 06:16:40.817117 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-29 06:16:40.817123 | orchestrator | Monday 29 September 2025 06:14:44 +0000 (0:00:00.264) 0:08:36.109 ****** 2025-09-29 06:16:40.817129 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817135 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817141 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817147 | orchestrator | 2025-09-29 06:16:40.817153 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-29 06:16:40.817160 | orchestrator | Monday 29 September 2025 06:14:45 +0000 (0:00:00.662) 0:08:36.771 ****** 2025-09-29 06:16:40.817166 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817172 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817178 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817184 | orchestrator | 2025-09-29 06:16:40.817190 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-29 06:16:40.817196 | orchestrator | Monday 29 September 2025 06:14:46 +0000 (0:00:00.683) 0:08:37.454 ****** 2025-09-29 06:16:40.817202 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817209 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817214 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817221 | orchestrator 
| 2025-09-29 06:16:40.817227 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-29 06:16:40.817233 | orchestrator | Monday 29 September 2025 06:14:47 +0000 (0:00:01.000) 0:08:38.454 ****** 2025-09-29 06:16:40.817239 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817245 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817251 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817257 | orchestrator | 2025-09-29 06:16:40.817263 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-29 06:16:40.817269 | orchestrator | Monday 29 September 2025 06:14:47 +0000 (0:00:00.321) 0:08:38.776 ****** 2025-09-29 06:16:40.817275 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817282 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817288 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817294 | orchestrator | 2025-09-29 06:16:40.817300 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-29 06:16:40.817306 | orchestrator | Monday 29 September 2025 06:14:47 +0000 (0:00:00.260) 0:08:39.037 ****** 2025-09-29 06:16:40.817312 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817319 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817325 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817331 | orchestrator | 2025-09-29 06:16:40.817337 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-29 06:16:40.817343 | orchestrator | Monday 29 September 2025 06:14:47 +0000 (0:00:00.275) 0:08:39.313 ****** 2025-09-29 06:16:40.817349 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817355 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817361 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817367 | orchestrator | 2025-09-29 
06:16:40.817373 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-29 06:16:40.817379 | orchestrator | Monday 29 September 2025 06:14:48 +0000 (0:00:00.871) 0:08:40.185 ****** 2025-09-29 06:16:40.817385 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817391 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817397 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817403 | orchestrator | 2025-09-29 06:16:40.817409 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-29 06:16:40.817416 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.683) 0:08:40.869 ****** 2025-09-29 06:16:40.817422 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817428 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817434 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817444 | orchestrator | 2025-09-29 06:16:40.817450 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-29 06:16:40.817471 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.245) 0:08:41.114 ****** 2025-09-29 06:16:40.817477 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817483 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817489 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817495 | orchestrator | 2025-09-29 06:16:40.817502 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-29 06:16:40.817508 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.251) 0:08:41.366 ****** 2025-09-29 06:16:40.817514 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817520 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817526 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817532 | orchestrator | 2025-09-29 06:16:40.817541 | orchestrator | TASK 
[ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-29 06:16:40.817548 | orchestrator | Monday 29 September 2025 06:14:50 +0000 (0:00:00.487) 0:08:41.854 ****** 2025-09-29 06:16:40.817554 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817560 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817566 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817572 | orchestrator | 2025-09-29 06:16:40.817578 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-29 06:16:40.817584 | orchestrator | Monday 29 September 2025 06:14:50 +0000 (0:00:00.321) 0:08:42.175 ****** 2025-09-29 06:16:40.817590 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817596 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817602 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817608 | orchestrator | 2025-09-29 06:16:40.817614 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-29 06:16:40.817620 | orchestrator | Monday 29 September 2025 06:14:51 +0000 (0:00:00.344) 0:08:42.520 ****** 2025-09-29 06:16:40.817626 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817633 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817639 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817645 | orchestrator | 2025-09-29 06:16:40.817651 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-29 06:16:40.817657 | orchestrator | Monday 29 September 2025 06:14:51 +0000 (0:00:00.273) 0:08:42.793 ****** 2025-09-29 06:16:40.817663 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817669 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817675 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817681 | orchestrator | 2025-09-29 06:16:40.817687 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2025-09-29 06:16:40.817697 | orchestrator | Monday 29 September 2025 06:14:51 +0000 (0:00:00.527) 0:08:43.320 ****** 2025-09-29 06:16:40.817703 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817709 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817715 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817721 | orchestrator | 2025-09-29 06:16:40.817727 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-29 06:16:40.817733 | orchestrator | Monday 29 September 2025 06:14:52 +0000 (0:00:00.500) 0:08:43.821 ****** 2025-09-29 06:16:40.817739 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817746 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817752 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817758 | orchestrator | 2025-09-29 06:16:40.817764 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-29 06:16:40.817770 | orchestrator | Monday 29 September 2025 06:14:52 +0000 (0:00:00.317) 0:08:44.138 ****** 2025-09-29 06:16:40.817776 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.817782 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.817788 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.817794 | orchestrator | 2025-09-29 06:16:40.817800 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-29 06:16:40.817813 | orchestrator | Monday 29 September 2025 06:14:53 +0000 (0:00:00.755) 0:08:44.893 ****** 2025-09-29 06:16:40.817819 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.817826 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.817836 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-29 06:16:40.817846 | orchestrator | 2025-09-29 06:16:40.817856 | orchestrator | TASK 
[ceph-facts : Get current default crush rule details] ********************* 2025-09-29 06:16:40.817874 | orchestrator | Monday 29 September 2025 06:14:53 +0000 (0:00:00.477) 0:08:45.371 ****** 2025-09-29 06:16:40.817885 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:16:40.817893 | orchestrator | 2025-09-29 06:16:40.817903 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-29 06:16:40.817912 | orchestrator | Monday 29 September 2025 06:14:56 +0000 (0:00:02.187) 0:08:47.559 ****** 2025-09-29 06:16:40.817923 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-29 06:16:40.817935 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.817944 | orchestrator | 2025-09-29 06:16:40.817953 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-29 06:16:40.817962 | orchestrator | Monday 29 September 2025 06:14:56 +0000 (0:00:00.222) 0:08:47.782 ****** 2025-09-29 06:16:40.817972 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-29 06:16:40.817988 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-29 06:16:40.817998 | orchestrator | 2025-09-29 06:16:40.818008 | orchestrator | TASK [ceph-mds : Create ceph filesystem] 
*************************************** 2025-09-29 06:16:40.818045 | orchestrator | Monday 29 September 2025 06:15:04 +0000 (0:00:08.146) 0:08:55.928 ****** 2025-09-29 06:16:40.818057 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:16:40.818067 | orchestrator | 2025-09-29 06:16:40.818077 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-29 06:16:40.818087 | orchestrator | Monday 29 September 2025 06:15:08 +0000 (0:00:04.102) 0:09:00.030 ****** 2025-09-29 06:16:40.818095 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.818101 | orchestrator | 2025-09-29 06:16:40.818114 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-29 06:16:40.818120 | orchestrator | Monday 29 September 2025 06:15:09 +0000 (0:00:00.627) 0:09:00.658 ****** 2025-09-29 06:16:40.818126 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-29 06:16:40.818132 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-29 06:16:40.818138 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-29 06:16:40.818144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-29 06:16:40.818150 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-29 06:16:40.818157 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-29 06:16:40.818163 | orchestrator | 2025-09-29 06:16:40.818169 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-29 06:16:40.818175 | orchestrator | Monday 29 September 2025 06:15:10 +0000 (0:00:01.016) 0:09:01.675 ****** 2025-09-29 06:16:40.818181 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-29 06:16:40.818194 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-29 06:16:40.818200 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-29 06:16:40.818206 | orchestrator | 2025-09-29 06:16:40.818212 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-29 06:16:40.818219 | orchestrator | Monday 29 September 2025 06:15:12 +0000 (0:00:02.219) 0:09:03.895 ****** 2025-09-29 06:16:40.818229 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-29 06:16:40.818236 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-29 06:16:40.818242 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818248 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-29 06:16:40.818254 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-29 06:16:40.818260 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.818266 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-29 06:16:40.818272 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-29 06:16:40.818278 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818285 | orchestrator | 2025-09-29 06:16:40.818291 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-29 06:16:40.818297 | orchestrator | Monday 29 September 2025 06:15:13 +0000 (0:00:01.212) 0:09:05.108 ****** 2025-09-29 06:16:40.818303 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818309 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.818315 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818321 | orchestrator | 2025-09-29 06:16:40.818327 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-29 06:16:40.818333 | orchestrator | Monday 29 September 2025 06:15:16 +0000 
(0:00:02.989) 0:09:08.098 ****** 2025-09-29 06:16:40.818339 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.818345 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:16:40.818352 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:16:40.818358 | orchestrator | 2025-09-29 06:16:40.818364 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-29 06:16:40.818370 | orchestrator | Monday 29 September 2025 06:15:16 +0000 (0:00:00.258) 0:09:08.357 ****** 2025-09-29 06:16:40.818376 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.818383 | orchestrator | 2025-09-29 06:16:40.818389 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-29 06:16:40.818395 | orchestrator | Monday 29 September 2025 06:15:17 +0000 (0:00:00.524) 0:09:08.881 ****** 2025-09-29 06:16:40.818401 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.818407 | orchestrator | 2025-09-29 06:16:40.818413 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-29 06:16:40.818419 | orchestrator | Monday 29 September 2025 06:15:18 +0000 (0:00:00.627) 0:09:09.508 ****** 2025-09-29 06:16:40.818426 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818432 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.818438 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818444 | orchestrator | 2025-09-29 06:16:40.818450 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-29 06:16:40.818469 | orchestrator | Monday 29 September 2025 06:15:19 +0000 (0:00:01.171) 0:09:10.680 ****** 2025-09-29 06:16:40.818476 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818482 | 
orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.818488 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818494 | orchestrator | 2025-09-29 06:16:40.818500 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-29 06:16:40.818506 | orchestrator | Monday 29 September 2025 06:15:20 +0000 (0:00:01.061) 0:09:11.741 ****** 2025-09-29 06:16:40.818512 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818523 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.818529 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818535 | orchestrator | 2025-09-29 06:16:40.818541 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-09-29 06:16:40.818547 | orchestrator | Monday 29 September 2025 06:15:22 +0000 (0:00:01.866) 0:09:13.608 ****** 2025-09-29 06:16:40.818554 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818560 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.818566 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818572 | orchestrator | 2025-09-29 06:16:40.818578 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-09-29 06:16:40.818584 | orchestrator | Monday 29 September 2025 06:15:24 +0000 (0:00:01.963) 0:09:15.571 ****** 2025-09-29 06:16:40.818590 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.818596 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.818602 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.818608 | orchestrator | 2025-09-29 06:16:40.818618 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-29 06:16:40.818625 | orchestrator | Monday 29 September 2025 06:15:25 +0000 (0:00:01.280) 0:09:16.851 ****** 2025-09-29 06:16:40.818631 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818637 | orchestrator | changed: 
[testbed-node-4] 2025-09-29 06:16:40.818643 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818649 | orchestrator | 2025-09-29 06:16:40.818655 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-09-29 06:16:40.818661 | orchestrator | Monday 29 September 2025 06:15:26 +0000 (0:00:00.605) 0:09:17.456 ****** 2025-09-29 06:16:40.818667 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.818674 | orchestrator | 2025-09-29 06:16:40.818680 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-09-29 06:16:40.818686 | orchestrator | Monday 29 September 2025 06:15:26 +0000 (0:00:00.484) 0:09:17.941 ****** 2025-09-29 06:16:40.818692 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.818698 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.818704 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.818710 | orchestrator | 2025-09-29 06:16:40.818716 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-09-29 06:16:40.818722 | orchestrator | Monday 29 September 2025 06:15:26 +0000 (0:00:00.403) 0:09:18.344 ****** 2025-09-29 06:16:40.818729 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:16:40.818735 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:16:40.818741 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:16:40.818747 | orchestrator | 2025-09-29 06:16:40.818753 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-09-29 06:16:40.818763 | orchestrator | Monday 29 September 2025 06:15:28 +0000 (0:00:01.158) 0:09:19.502 ****** 2025-09-29 06:16:40.818769 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-29 06:16:40.818775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-29 
06:16:40.818781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-29 06:16:40.818787 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:16:40.818793 | orchestrator | 2025-09-29 06:16:40.818800 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-09-29 06:16:40.818806 | orchestrator | Monday 29 September 2025 06:15:28 +0000 (0:00:00.537) 0:09:20.040 ****** 2025-09-29 06:16:40.818812 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:16:40.818818 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:16:40.818824 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:16:40.818830 | orchestrator | 2025-09-29 06:16:40.818836 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-29 06:16:40.818842 | orchestrator | 2025-09-29 06:16:40.818848 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-29 06:16:40.818854 | orchestrator | Monday 29 September 2025 06:15:29 +0000 (0:00:00.452) 0:09:20.492 ****** 2025-09-29 06:16:40.818864 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.818871 | orchestrator | 2025-09-29 06:16:40.818877 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-29 06:16:40.818883 | orchestrator | Monday 29 September 2025 06:15:29 +0000 (0:00:00.576) 0:09:21.068 ****** 2025-09-29 06:16:40.818889 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:16:40.818895 | orchestrator | 2025-09-29 06:16:40.818901 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-29 06:16:40.818907 | orchestrator | Monday 29 September 2025 06:15:30 +0000 (0:00:00.431) 0:09:21.500 ****** 
2025-09-29 06:16:40.818913 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.818919 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.818925 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.818931 | orchestrator |
2025-09-29 06:16:40.818937 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-29 06:16:40.818943 | orchestrator | Monday 29 September 2025 06:15:30 +0000 (0:00:00.400) 0:09:21.900 ******
2025-09-29 06:16:40.818949 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.818955 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.818962 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.818968 | orchestrator |
2025-09-29 06:16:40.818974 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-29 06:16:40.818980 | orchestrator | Monday 29 September 2025 06:15:31 +0000 (0:00:00.637) 0:09:22.538 ******
2025-09-29 06:16:40.818986 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.818992 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.818998 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819004 | orchestrator |
2025-09-29 06:16:40.819010 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-29 06:16:40.819016 | orchestrator | Monday 29 September 2025 06:15:31 +0000 (0:00:00.685) 0:09:23.223 ******
2025-09-29 06:16:40.819022 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819028 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819034 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819040 | orchestrator |
2025-09-29 06:16:40.819046 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-29 06:16:40.819052 | orchestrator | Monday 29 September 2025 06:15:32 +0000 (0:00:00.657) 0:09:23.881 ******
2025-09-29 06:16:40.819058 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819064 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819070 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819076 | orchestrator |
2025-09-29 06:16:40.819082 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-29 06:16:40.819088 | orchestrator | Monday 29 September 2025 06:15:32 +0000 (0:00:00.456) 0:09:24.338 ******
2025-09-29 06:16:40.819095 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819101 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819107 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819113 | orchestrator |
2025-09-29 06:16:40.819119 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-29 06:16:40.819128 | orchestrator | Monday 29 September 2025 06:15:33 +0000 (0:00:00.310) 0:09:24.648 ******
2025-09-29 06:16:40.819134 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819140 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819147 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819153 | orchestrator |
2025-09-29 06:16:40.819159 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-29 06:16:40.819165 | orchestrator | Monday 29 September 2025 06:15:33 +0000 (0:00:00.309) 0:09:24.958 ******
2025-09-29 06:16:40.819171 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819177 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819187 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819194 | orchestrator |
2025-09-29 06:16:40.819200 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-29 06:16:40.819206 | orchestrator | Monday 29 September 2025 06:15:34 +0000 (0:00:00.692) 0:09:25.650 ******
2025-09-29 06:16:40.819212 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819218 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819224 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819230 | orchestrator |
2025-09-29 06:16:40.819236 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-29 06:16:40.819242 | orchestrator | Monday 29 September 2025 06:15:35 +0000 (0:00:01.001) 0:09:26.652 ******
2025-09-29 06:16:40.819249 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819255 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819261 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819267 | orchestrator |
2025-09-29 06:16:40.819273 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-29 06:16:40.819279 | orchestrator | Monday 29 September 2025 06:15:35 +0000 (0:00:00.307) 0:09:26.959 ******
2025-09-29 06:16:40.819285 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819295 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819301 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819307 | orchestrator |
2025-09-29 06:16:40.819314 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-29 06:16:40.819320 | orchestrator | Monday 29 September 2025 06:15:35 +0000 (0:00:00.279) 0:09:27.239 ******
2025-09-29 06:16:40.819326 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819332 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819338 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819344 | orchestrator |
2025-09-29 06:16:40.819350 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-29 06:16:40.819356 | orchestrator | Monday 29 September 2025 06:15:36 +0000 (0:00:00.347) 0:09:27.587 ******
2025-09-29 06:16:40.819362 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819368 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819374 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819380 | orchestrator |
2025-09-29 06:16:40.819386 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-29 06:16:40.819392 | orchestrator | Monday 29 September 2025 06:15:36 +0000 (0:00:00.575) 0:09:28.163 ******
2025-09-29 06:16:40.819398 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819404 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819410 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819416 | orchestrator |
2025-09-29 06:16:40.819422 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-29 06:16:40.819428 | orchestrator | Monday 29 September 2025 06:15:37 +0000 (0:00:00.313) 0:09:28.476 ******
2025-09-29 06:16:40.819434 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819441 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819447 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819453 | orchestrator |
2025-09-29 06:16:40.819473 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-29 06:16:40.819480 | orchestrator | Monday 29 September 2025 06:15:37 +0000 (0:00:00.296) 0:09:28.773 ******
2025-09-29 06:16:40.819486 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819492 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819498 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819504 | orchestrator |
2025-09-29 06:16:40.819510 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-29 06:16:40.819516 | orchestrator | Monday 29 September 2025 06:15:37 +0000 (0:00:00.286) 0:09:29.060 ******
2025-09-29 06:16:40.819522 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819528 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819534 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819540 | orchestrator |
2025-09-29 06:16:40.819550 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-29 06:16:40.819556 | orchestrator | Monday 29 September 2025 06:15:38 +0000 (0:00:00.556) 0:09:29.617 ******
2025-09-29 06:16:40.819562 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819568 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819574 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819580 | orchestrator |
2025-09-29 06:16:40.819586 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-29 06:16:40.819592 | orchestrator | Monday 29 September 2025 06:15:38 +0000 (0:00:00.336) 0:09:29.954 ******
2025-09-29 06:16:40.819598 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.819604 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.819610 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.819616 | orchestrator |
2025-09-29 06:16:40.819622 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-09-29 06:16:40.819628 | orchestrator | Monday 29 September 2025 06:15:39 +0000 (0:00:00.514) 0:09:30.469 ******
2025-09-29 06:16:40.819635 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.819641 | orchestrator |
2025-09-29 06:16:40.819647 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-29 06:16:40.819653 | orchestrator | Monday 29 September 2025 06:15:39 +0000 (0:00:00.772) 0:09:31.241 ******
2025-09-29 06:16:40.819659 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.819665 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-29 06:16:40.819671 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-29 06:16:40.819677 | orchestrator |
2025-09-29 06:16:40.819683 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-29 06:16:40.819692 | orchestrator | Monday 29 September 2025 06:15:42 +0000 (0:00:02.191) 0:09:33.433 ******
2025-09-29 06:16:40.819699 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-29 06:16:40.819705 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-29 06:16:40.819711 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.819717 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-29 06:16:40.819723 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-29 06:16:40.819729 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.819735 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-29 06:16:40.819741 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-29 06:16:40.819747 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.819753 | orchestrator |
2025-09-29 06:16:40.819760 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-09-29 06:16:40.819766 | orchestrator | Monday 29 September 2025 06:15:43 +0000 (0:00:01.208) 0:09:34.641 ******
2025-09-29 06:16:40.819772 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.819778 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.819784 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.819790 | orchestrator |
2025-09-29 06:16:40.819796 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-09-29 06:16:40.819802 | orchestrator | Monday 29 September 2025 06:15:43 +0000 (0:00:00.294) 0:09:34.935 ******
2025-09-29 06:16:40.819808 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.819814 | orchestrator |
2025-09-29 06:16:40.819820 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-09-29 06:16:40.819830 | orchestrator | Monday 29 September 2025 06:15:44 +0000 (0:00:00.810) 0:09:35.746 ******
2025-09-29 06:16:40.819837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.819843 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.819853 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.819859 | orchestrator |
2025-09-29 06:16:40.819865 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-09-29 06:16:40.819871 | orchestrator | Monday 29 September 2025 06:15:45 +0000 (0:00:00.773) 0:09:36.520 ******
2025-09-29 06:16:40.819877 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.819883 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-29 06:16:40.819889 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.819895 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-29 06:16:40.819901 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.819908 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-09-29 06:16:40.819914 | orchestrator |
2025-09-29 06:16:40.819920 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-09-29 06:16:40.819926 | orchestrator | Monday 29 September 2025 06:15:49 +0000 (0:00:04.581) 0:09:41.102 ******
2025-09-29 06:16:40.819932 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.819938 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-29 06:16:40.819944 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.819950 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-29 06:16:40.819956 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:16:40.819962 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-29 06:16:40.819968 | orchestrator |
2025-09-29 06:16:40.819974 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-09-29 06:16:40.819981 | orchestrator | Monday 29 September 2025 06:15:52 +0000 (0:00:03.055) 0:09:44.157 ******
2025-09-29 06:16:40.819986 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-29 06:16:40.819993 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.819999 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-29 06:16:40.820005 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.820011 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-29 06:16:40.820017 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.820023 | orchestrator |
2025-09-29 06:16:40.820029 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-09-29 06:16:40.820035 | orchestrator | Monday 29 September 2025 06:15:53 +0000 (0:00:01.209) 0:09:45.367 ******
2025-09-29 06:16:40.820041 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-09-29 06:16:40.820047 | orchestrator |
2025-09-29 06:16:40.820053 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-09-29 06:16:40.820059 | orchestrator | Monday 29 September 2025 06:15:54 +0000 (0:00:00.227) 0:09:45.595 ******
2025-09-29 06:16:40.820068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820103 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.820109 | orchestrator |
2025-09-29 06:16:40.820115 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-09-29 06:16:40.820121 | orchestrator | Monday 29 September 2025 06:15:55 +0000 (0:00:00.883) 0:09:46.478 ******
2025-09-29 06:16:40.820127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820161 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.820167 | orchestrator |
2025-09-29 06:16:40.820174 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-09-29 06:16:40.820180 | orchestrator | Monday 29 September 2025 06:15:55 +0000 (0:00:00.898) 0:09:47.376 ******
2025-09-29 06:16:40.820186 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820192 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820198 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820204 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820210 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-09-29 06:16:40.820216 | orchestrator |
2025-09-29 06:16:40.820222 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-09-29 06:16:40.820228 | orchestrator | Monday 29 September 2025 06:16:27 +0000 (0:00:32.023) 0:10:19.400 ******
2025-09-29 06:16:40.820234 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.820240 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.820246 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.820252 | orchestrator |
2025-09-29 06:16:40.820259 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-09-29 06:16:40.820265 | orchestrator | Monday 29 September 2025 06:16:28 +0000 (0:00:00.408) 0:10:19.808 ******
2025-09-29 06:16:40.820271 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.820277 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.820283 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.820289 | orchestrator |
2025-09-29 06:16:40.820295 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-09-29 06:16:40.820301 | orchestrator | Monday 29 September 2025 06:16:28 +0000 (0:00:00.296) 0:10:20.105 ******
2025-09-29 06:16:40.820307 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.820318 | orchestrator |
2025-09-29 06:16:40.820325 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-09-29 06:16:40.820331 | orchestrator | Monday 29 September 2025 06:16:29 +0000 (0:00:00.484) 0:10:20.589 ******
2025-09-29 06:16:40.820337 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.820343 | orchestrator |
2025-09-29 06:16:40.820349 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-09-29 06:16:40.820355 | orchestrator | Monday 29 September 2025 06:16:29 +0000 (0:00:00.595) 0:10:21.184 ******
2025-09-29 06:16:40.820361 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.820367 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.820373 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.820379 | orchestrator |
2025-09-29 06:16:40.820385 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-09-29 06:16:40.820391 | orchestrator | Monday 29 September 2025 06:16:30 +0000 (0:00:01.224) 0:10:22.409 ******
2025-09-29 06:16:40.820400 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.820406 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.820412 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.820418 | orchestrator |
2025-09-29 06:16:40.820425 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-09-29 06:16:40.820431 | orchestrator | Monday 29 September 2025 06:16:32 +0000 (0:00:01.792) 0:10:23.505 ******
2025-09-29 06:16:40.820437 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:16:40.820443 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:16:40.820449 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:16:40.820455 | orchestrator |
2025-09-29 06:16:40.820474 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-09-29 06:16:40.820481 | orchestrator | Monday 29 September 2025 06:16:33 +0000 (0:00:02.384) 0:10:25.297 ******
2025-09-29 06:16:40.820487 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.820493 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.820499 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-09-29 06:16:40.820505 | orchestrator |
2025-09-29 06:16:40.820511 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-29 06:16:40.820517 | orchestrator | Monday 29 September 2025 06:16:36 +0000 (0:00:02.384) 0:10:27.682 ******
2025-09-29 06:16:40.820523 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.820532 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.820538 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.820544 | orchestrator |
2025-09-29 06:16:40.820550 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-29 06:16:40.820557 | orchestrator | Monday 29 September 2025 06:16:36 +0000 (0:00:00.418) 0:10:28.101 ******
2025-09-29 06:16:40.820563 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:16:40.820569 | orchestrator |
2025-09-29 06:16:40.820575 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-29 06:16:40.820581 | orchestrator | Monday 29 September 2025 06:16:37 +0000 (0:00:00.457) 0:10:28.558 ******
2025-09-29 06:16:40.820587 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.820593 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.820599 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.820605 | orchestrator |
2025-09-29 06:16:40.820611 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-29 06:16:40.820617 | orchestrator | Monday 29 September 2025 06:16:37 +0000 (0:00:00.263) 0:10:28.821 ******
2025-09-29 06:16:40.820623 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.820692 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:16:40.820698 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:16:40.820704 | orchestrator |
2025-09-29 06:16:40.820710 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-29 06:16:40.820716 | orchestrator | Monday 29 September 2025 06:16:37 +0000 (0:00:00.429) 0:10:29.251 ******
2025-09-29 06:16:40.820722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:16:40.820729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:16:40.820735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:16:40.820741 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:16:40.820747 | orchestrator |
2025-09-29 06:16:40.820753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-29 06:16:40.820759 | orchestrator | Monday 29 September 2025 06:16:38 +0000 (0:00:00.582) 0:10:29.834 ******
2025-09-29 06:16:40.820765 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:16:40.820771 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:16:40.820777 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:16:40.820783 | orchestrator |
2025-09-29 06:16:40.820789 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:16:40.820795 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-09-29 06:16:40.820802 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-09-29 06:16:40.820808 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-09-29 06:16:40.820814 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-09-29 06:16:40.820820 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-09-29 06:16:40.820826 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-09-29 06:16:40.820832 | orchestrator |
2025-09-29 06:16:40.820838 | orchestrator |
2025-09-29 06:16:40.820844 | orchestrator |
2025-09-29 06:16:40.820850 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:16:40.820856 | orchestrator | Monday 29 September 2025 06:16:38 +0000 (0:00:00.220) 0:10:30.054 ******
2025-09-29 06:16:40.820863 | orchestrator | ===============================================================================
2025-09-29 06:16:40.820869 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 49.42s
2025-09-29 06:16:40.820879 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 37.44s
2025-09-29 06:16:40.820885 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.02s
2025-09-29 06:16:40.820891 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.59s
2025-09-29 06:16:40.820897 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.84s
2025-09-29 06:16:40.820903 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.76s
2025-09-29 06:16:40.820909 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.70s
2025-09-29 06:16:40.820916 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.59s
2025-09-29 06:16:40.820922 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.81s
2025-09-29 06:16:40.820928 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.15s
2025-09-29 06:16:40.820934 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.93s
2025-09-29 06:16:40.820940 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.49s
2025-09-29 06:16:40.820950 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.69s
2025-09-29 06:16:40.820956 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.58s
2025-09-29 06:16:40.820962 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.10s
2025-09-29 06:16:40.820972 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.03s
2025-09-29 06:16:40.820978 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.76s
2025-09-29 06:16:40.820984 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.76s
2025-09-29 06:16:40.820990 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.75s
2025-09-29 06:16:40.820996 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.71s
2025-09-29 06:16:40.821002 | orchestrator | 2025-09-29 06:16:40 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:16:43.841016 | orchestrator | 2025-09-29 06:16:43 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:16:43.842264 | orchestrator | 2025-09-29 06:16:43 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:16:43.843767 | orchestrator | 2025-09-29 06:16:43 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:16:43.843830 | orchestrator | 2025-09-29 06:16:43 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:16:46.898904 | orchestrator | 2025-09-29 06:16:46 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:16:46.900410 | orchestrator | 2025-09-29 06:16:46 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:16:46.902271 | orchestrator | 2025-09-29 06:16:46 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:16:46.902595 | orchestrator | 2025-09-29 06:16:46 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:16:49.948976 | orchestrator | 2025-09-29 06:16:49 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:16:49.950839 | orchestrator | 2025-09-29 06:16:49 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:16:49.952626 | orchestrator | 2025-09-29 06:16:49 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:16:49.952669 | orchestrator | 2025-09-29 06:16:49 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:16:53.006356 | orchestrator | 2025-09-29 06:16:53 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:16:53.007014 | orchestrator | 2025-09-29 06:16:53 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:16:53.008166 | orchestrator | 2025-09-29 06:16:53 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:16:53.008202 | orchestrator | 2025-09-29 06:16:53 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:16:56.048416 | orchestrator | 2025-09-29 06:16:56 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:16:56.049717 | orchestrator | 2025-09-29 06:16:56 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:16:56.051207 | orchestrator | 2025-09-29 06:16:56 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:16:56.051253 | orchestrator | 2025-09-29 06:16:56 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:16:59.096628 | orchestrator | 2025-09-29 06:16:59 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:16:59.098373 | orchestrator | 2025-09-29 06:16:59 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:16:59.100123 | orchestrator | 2025-09-29 06:16:59 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:16:59.100336 | orchestrator | 2025-09-29 06:16:59 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:02.136901 | orchestrator | 2025-09-29 06:17:02 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:02.139320 | orchestrator | 2025-09-29 06:17:02 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:17:02.141135 | orchestrator | 2025-09-29 06:17:02 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:02.141162 | orchestrator | 2025-09-29 06:17:02 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:05.184663 | orchestrator | 2025-09-29 06:17:05 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:05.189722 | orchestrator | 2025-09-29 06:17:05 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:17:05.192082 | orchestrator | 2025-09-29 06:17:05 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:05.192145 | orchestrator | 2025-09-29 06:17:05 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:08.256255 | orchestrator | 2025-09-29 06:17:08 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:08.257916 | orchestrator | 2025-09-29 06:17:08 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:17:08.259727 | orchestrator | 2025-09-29 06:17:08 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:08.260359 | orchestrator | 2025-09-29 06:17:08 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:11.320966 | orchestrator | 2025-09-29 06:17:11 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:11.322862 | orchestrator | 2025-09-29 06:17:11 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:17:11.325047 | orchestrator | 2025-09-29 06:17:11 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:11.325118 | orchestrator | 2025-09-29 06:17:11 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:14.366873 | orchestrator | 2025-09-29 06:17:14 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:14.369010 | orchestrator | 2025-09-29 06:17:14 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED
2025-09-29 06:17:14.371713 | orchestrator | 2025-09-29 06:17:14 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:14.372054 | orchestrator | 2025-09-29 06:17:14 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:17.423731 | orchestrator | 2025-09-29 06:17:17 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:17.424227 | orchestrator | 2025-09-29 06:17:17 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in
state STARTED 2025-09-29 06:17:17.426321 | orchestrator | 2025-09-29 06:17:17 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:17:17.426786 | orchestrator | 2025-09-29 06:17:17 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:20.487156 | orchestrator | 2025-09-29 06:17:20 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:20.490254 | orchestrator | 2025-09-29 06:17:20 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:17:20.492952 | orchestrator | 2025-09-29 06:17:20 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:17:20.493023 | orchestrator | 2025-09-29 06:17:20 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:23.538256 | orchestrator | 2025-09-29 06:17:23 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:23.541546 | orchestrator | 2025-09-29 06:17:23 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:17:23.542596 | orchestrator | 2025-09-29 06:17:23 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:17:23.542612 | orchestrator | 2025-09-29 06:17:23 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:26.590589 | orchestrator | 2025-09-29 06:17:26 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:26.590715 | orchestrator | 2025-09-29 06:17:26 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state STARTED 2025-09-29 06:17:26.591468 | orchestrator | 2025-09-29 06:17:26 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED 2025-09-29 06:17:26.591510 | orchestrator | 2025-09-29 06:17:26 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:29.632820 | orchestrator | 2025-09-29 06:17:29 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:29.635228 | orchestrator 
| 2025-09-29 06:17:29 | INFO  | Task 61428eb7-3418-404a-be55-80216180dd53 is in state SUCCESS
2025-09-29 06:17:29.636825 | orchestrator |
2025-09-29 06:17:29.637012 | orchestrator |
2025-09-29 06:17:29.637030 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:17:29.637042 | orchestrator |
2025-09-29 06:17:29.637053 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:17:29.637064 | orchestrator | Monday 29 September 2025 06:14:31 +0000 (0:00:00.278) 0:00:00.278 ******
2025-09-29 06:17:29.637076 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:17:29.637088 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:17:29.637099 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:17:29.637109 | orchestrator |
2025-09-29 06:17:29.637121 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:17:29.637131 | orchestrator | Monday 29 September 2025 06:14:31 +0000 (0:00:00.265) 0:00:00.544 ******
2025-09-29 06:17:29.637143 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-29 06:17:29.637154 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-29 06:17:29.637181 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-29 06:17:29.637193 | orchestrator |
2025-09-29 06:17:29.637204 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-29 06:17:29.637214 | orchestrator |
2025-09-29 06:17:29.637225 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-29 06:17:29.637235 | orchestrator | Monday 29 September 2025 06:14:31 +0000 (0:00:00.426) 0:00:00.970 ******
2025-09-29 06:17:29.637246 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:17:29.637257 |
orchestrator |
2025-09-29 06:17:29.637273 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-09-29 06:17:29.637292 | orchestrator | Monday 29 September 2025 06:14:32 +0000 (0:00:00.542) 0:00:01.513 ******
2025-09-29 06:17:29.637310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-29 06:17:29.637329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-29 06:17:29.637348 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-29 06:17:29.637365 | orchestrator |
2025-09-29 06:17:29.637448 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-09-29 06:17:29.637469 | orchestrator | Monday 29 September 2025 06:14:32 +0000 (0:00:00.595) 0:00:02.108 ******
2025-09-29 06:17:29.637494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-29 06:17:29.637521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image':
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.637566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.637602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.637627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.637666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.637687 | orchestrator | 2025-09-29 06:17:29.637705 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-29 06:17:29.637719 | orchestrator | Monday 29 September 2025 06:14:34 +0000 (0:00:01.618) 0:00:03.726 ****** 2025-09-29 06:17:29.637731 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:17:29.637744 | orchestrator | 2025-09-29 06:17:29.637762 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-29 06:17:29.637781 | orchestrator | Monday 29 September 2025 06:14:35 +0000 (0:00:00.530) 0:00:04.257 ****** 2025-09-29 06:17:29.637814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.637844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.637875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.637897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.637929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.637948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.637967 | orchestrator | 2025-09-29 06:17:29.637978 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-29 06:17:29.637989 | orchestrator | Monday 29 September 2025 06:14:37 +0000 (0:00:02.344) 0:00:06.601 ****** 2025-09-29 06:17:29.638000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:17:29.638012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:17:29.638081 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:29.638094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:17:29.638119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:17:29.638140 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:29.638152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:17:29.638164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:17:29.638175 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:29.638187 | orchestrator | 2025-09-29 06:17:29.638198 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] 
*** 2025-09-29 06:17:29.638209 | orchestrator | Monday 29 September 2025 06:14:38 +0000 (0:00:00.963) 0:00:07.565 ****** 2025-09-29 06:17:29.638220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:17:29.638244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:17:29.638263 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:29.638274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:17:29.638286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:17:29.638298 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:29.638311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-29 06:17:29.638349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-29 06:17:29.638379 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:29.638440 | orchestrator | 2025-09-29 06:17:29.638459 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-29 06:17:29.638476 | orchestrator | Monday 29 September 2025 06:14:39 +0000 (0:00:00.983) 0:00:08.548 ****** 2025-09-29 06:17:29.638496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.638518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.638538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.638573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.638622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.638646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.638665 | orchestrator | 2025-09-29 06:17:29.638684 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-29 06:17:29.638696 | orchestrator | Monday 29 September 2025 06:14:41 +0000 (0:00:02.466) 0:00:11.015 ****** 2025-09-29 06:17:29.638707 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:29.638718 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:17:29.638728 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:17:29.638739 | orchestrator | 2025-09-29 06:17:29.638750 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-29 06:17:29.638760 | orchestrator | Monday 29 September 2025 06:14:44 +0000 (0:00:03.134) 0:00:14.149 ****** 2025-09-29 06:17:29.638771 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:29.638781 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:17:29.638792 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:17:29.638802 | orchestrator | 2025-09-29 06:17:29.638813 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-29 06:17:29.638824 | orchestrator | Monday 29 September 2025 06:14:46 +0000 (0:00:01.718) 0:00:15.868 ****** 2025-09-29 06:17:29.638835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.638869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.638882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-29 06:17:29.638894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.638906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.638939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-29 06:17:29.638952 | orchestrator | 2025-09-29 06:17:29.638962 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-29 06:17:29.638973 | orchestrator | Monday 29 September 2025 06:14:48 +0000 (0:00:02.328) 0:00:18.197 ****** 2025-09-29 06:17:29.638984 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:29.638995 | orchestrator | skipping: 
[testbed-node-1]
2025-09-29 06:17:29.639005 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:17:29.639016 | orchestrator |
2025-09-29 06:17:29.639026 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-29 06:17:29.639037 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.249) 0:00:18.447 ******
2025-09-29 06:17:29.639047 | orchestrator |
2025-09-29 06:17:29.639058 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-29 06:17:29.639068 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.059) 0:00:18.506 ******
2025-09-29 06:17:29.639079 | orchestrator |
2025-09-29 06:17:29.639089 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-29 06:17:29.639100 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.059) 0:00:18.566 ******
2025-09-29 06:17:29.639111 | orchestrator |
2025-09-29 06:17:29.639121 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-09-29 06:17:29.639132 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.058) 0:00:18.625 ******
2025-09-29 06:17:29.639142 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:17:29.639153 | orchestrator |
2025-09-29 06:17:29.639164 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-09-29 06:17:29.639175 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.166) 0:00:18.791 ******
2025-09-29 06:17:29.639185 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:17:29.639196 | orchestrator |
2025-09-29 06:17:29.639206 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-09-29 06:17:29.639217 | orchestrator | Monday 29 September 2025 06:14:49 +0000 (0:00:00.409) 0:00:19.201 ******
2025-09-29 06:17:29.639227 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:17:29.639238 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:17:29.639249 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:17:29.639259 | orchestrator |
2025-09-29 06:17:29.639270 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-09-29 06:17:29.639280 | orchestrator | Monday 29 September 2025 06:15:56 +0000 (0:01:06.874) 0:01:26.075 ******
2025-09-29 06:17:29.639291 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:17:29.639301 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:17:29.639312 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:17:29.639322 | orchestrator |
2025-09-29 06:17:29.639340 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-29 06:17:29.639351 | orchestrator | Monday 29 September 2025 06:17:14 +0000 (0:01:18.063) 0:02:44.138 ******
2025-09-29 06:17:29.639362 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:17:29.639373 | orchestrator |
2025-09-29 06:17:29.639438 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-09-29 06:17:29.639451 | orchestrator | Monday 29 September 2025 06:17:15 +0000 (0:00:00.566) 0:02:44.704 ******
2025-09-29 06:17:29.639461 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:17:29.639472 | orchestrator |
2025-09-29 06:17:29.639483 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-09-29 06:17:29.639493 | orchestrator | Monday 29 September 2025 06:17:18 +0000 (0:00:02.559) 0:02:47.264 ******
2025-09-29 06:17:29.639507 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:17:29.639527 | orchestrator |
2025-09-29 06:17:29.639545 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-29 06:17:29.639564 | orchestrator | Monday 29 September 2025 06:17:20 +0000 (0:00:02.555) 0:02:49.820 ******
2025-09-29 06:17:29.639582 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:17:29.639602 | orchestrator |
2025-09-29 06:17:29.639620 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-29 06:17:29.639637 | orchestrator | Monday 29 September 2025 06:17:23 +0000 (0:00:03.043) 0:02:52.864 ******
2025-09-29 06:17:29.639653 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:17:29.639670 | orchestrator |
2025-09-29 06:17:29.639682 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:17:29.639693 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-29 06:17:29.639704 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-29 06:17:29.639713 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-29 06:17:29.639723 | orchestrator |
2025-09-29 06:17:29.639732 | orchestrator |
2025-09-29 06:17:29.639742 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:17:29.639758 | orchestrator | Monday 29 September 2025 06:17:26 +0000 (0:00:02.933) 0:02:55.797 ******
2025-09-29 06:17:29.639768 | orchestrator | ===============================================================================
2025-09-29 06:17:29.639778 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 78.06s
2025-09-29 06:17:29.639787 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.87s
2025-09-29 06:17:29.639796 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.13s
2025-09-29 06:17:29.639806 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.04s
2025-09-29 06:17:29.639815 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.93s
2025-09-29 06:17:29.639824 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.56s
2025-09-29 06:17:29.639840 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.56s
2025-09-29 06:17:29.639850 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.47s
2025-09-29 06:17:29.639859 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.34s
2025-09-29 06:17:29.639868 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.33s
2025-09-29 06:17:29.639878 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.72s
2025-09-29 06:17:29.639887 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.62s
2025-09-29 06:17:29.639896 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.98s
2025-09-29 06:17:29.639915 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.96s
2025-09-29 06:17:29.639925 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.60s
2025-09-29 06:17:29.639934 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s
2025-09-29 06:17:29.639944 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-09-29 06:17:29.639953 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-09-29 06:17:29.639962 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2025-09-29 06:17:29.639972 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.41s
2025-09-29 06:17:29.639982 | orchestrator | 2025-09-29 06:17:29 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:29.639991 | orchestrator | 2025-09-29 06:17:29 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:32.671984 | orchestrator | 2025-09-29 06:17:32 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:32.673300 | orchestrator | 2025-09-29 06:17:32 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:32.673663 | orchestrator | 2025-09-29 06:17:32 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:35.722527 | orchestrator | 2025-09-29 06:17:35 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:35.723716 | orchestrator | 2025-09-29 06:17:35 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:35.723751 | orchestrator | 2025-09-29 06:17:35 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:38.768309 | orchestrator | 2025-09-29 06:17:38 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:38.770896 | orchestrator | 2025-09-29 06:17:38 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state STARTED
2025-09-29 06:17:38.770953 | orchestrator | 2025-09-29 06:17:38 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:17:41.824904 | orchestrator | 2025-09-29 06:17:41 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED
2025-09-29 06:17:41.825704 | orchestrator | 2025-09-29 06:17:41 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED
2025-09-29 06:17:41.828162 | orchestrator | 2025-09-29 06:17:41 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:17:41.830268 | orchestrator | 2025-09-29 06:17:41 | INFO  | Task 4443303e-3e6a-4c84-a939-dda9f75bd742 is in state SUCCESS
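The INFO lines above show the orchestrator polling its task IDs once per interval until each leaves the STARTED state. A minimal sketch of that poll-until-terminal loop (the `get_state` callback is a hypothetical stand-in for the real task-status lookup, not the OSISM API):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}


def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll every task until it reaches a terminal state; return the final states."""
    pending = set(task_ids)
    final = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                final[task_id] = state
                pending.discard(task_id)
        if pending:
            print(f"INFO  | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return final


# Usage with a canned state sequence standing in for the real task API:
states = iter(["STARTED", "STARTED", "SUCCESS"])
result = wait_for_tasks(["4443303e"], lambda _id: next(states), interval=0)
```

New task IDs appearing mid-run (as with `ae00de38` and `734e8ff1` above) would simply be extra entries in the pending set; this sketch only covers a fixed set known up front.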
2025-09-29 06:17:41.831964 | orchestrator |
2025-09-29 06:17:41.832019 | orchestrator |
2025-09-29 06:17:41.832033 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-29 06:17:41.832045 | orchestrator |
2025-09-29 06:17:41.832057 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-29 06:17:41.832068 | orchestrator | Monday 29 September 2025 06:14:30 +0000 (0:00:00.102) 0:00:00.102 ******
2025-09-29 06:17:41.832079 | orchestrator | ok: [localhost] => {
2025-09-29 06:17:41.832092 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-29 06:17:41.832104 | orchestrator | }
2025-09-29 06:17:41.832115 | orchestrator |
2025-09-29 06:17:41.832126 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-29 06:17:41.832137 | orchestrator | Monday 29 September 2025 06:14:31 +0000 (0:00:00.058) 0:00:00.160 ******
2025-09-29 06:17:41.832148 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-29 06:17:41.832160 | orchestrator | ...ignoring
2025-09-29 06:17:41.832197 | orchestrator |
2025-09-29 06:17:41.832208 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-29 06:17:41.832219 | orchestrator | Monday 29 September 2025 06:14:33 +0000 (0:00:02.907) 0:00:03.068 ******
2025-09-29 06:17:41.832229 | orchestrator | skipping: [localhost]
2025-09-29 06:17:41.832240 | orchestrator |
2025-09-29 06:17:41.832251 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-29 06:17:41.832261 | orchestrator | Monday 29 September 2025 06:14:34 +0000 (0:00:00.054) 0:00:03.123 ******
2025-09-29 06:17:41.832272 | orchestrator | ok: [localhost]
2025-09-29 06:17:41.832283 | orchestrator |
2025-09-29 06:17:41.832293 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:17:41.832304 | orchestrator |
2025-09-29 06:17:41.832327 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:17:41.832338 | orchestrator | Monday 29 September 2025 06:14:34 +0000 (0:00:00.152) 0:00:03.276 ******
2025-09-29 06:17:41.832349 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:17:41.832415 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:17:41.832428 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:17:41.832438 | orchestrator |
2025-09-29 06:17:41.832474 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:17:41.832485 | orchestrator | Monday 29 September 2025 06:14:34 +0000 (0:00:00.321) 0:00:03.597 ******
2025-09-29 06:17:41.832496 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-29 06:17:41.832507 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
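The 'Check MariaDB service' failure above comes from probing 192.168.16.9:3306 for the string "MariaDB", which a running server sends in its initial greeting; since nothing is deployed yet, the probe times out and the play ignores the error ("...ignoring") and falls back to the deploy action. A simplified sketch of such a banner probe (this is not the Ansible `wait_for` implementation, just the same idea in plain sockets):

```python
import socket


def banner_contains(host, port, needle, timeout=2.0):
    """Connect and check whether the server's unsolicited greeting contains needle."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            data = sock.recv(4096)  # MariaDB sends its handshake packet unprompted
            return needle.encode() in data
    except OSError:
        # Connection refused or timed out: the service is not up yet,
        # matching the failed check in the log above.
        return False
```

When this returns False the playbook's follow-up tasks behave exactly as logged: the "upgrade if MariaDB is already running" branch is skipped and kolla_action_mariadb is set from kolla_action_ng instead.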
2025-09-29 06:17:41.832518 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-29 06:17:41.832529 | orchestrator | 2025-09-29 06:17:41.832539 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-29 06:17:41.832552 | orchestrator | 2025-09-29 06:17:41.832565 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-29 06:17:41.832577 | orchestrator | Monday 29 September 2025 06:14:35 +0000 (0:00:00.522) 0:00:04.120 ****** 2025-09-29 06:17:41.832590 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-29 06:17:41.832602 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-29 06:17:41.832614 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-29 06:17:41.832626 | orchestrator | 2025-09-29 06:17:41.832639 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-29 06:17:41.832651 | orchestrator | Monday 29 September 2025 06:14:35 +0000 (0:00:00.447) 0:00:04.568 ****** 2025-09-29 06:17:41.832663 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:17:41.832677 | orchestrator | 2025-09-29 06:17:41.832689 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-29 06:17:41.832701 | orchestrator | Monday 29 September 2025 06:14:35 +0000 (0:00:00.474) 0:00:05.042 ****** 2025-09-29 06:17:41.832736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.832771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.832787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.832817 | orchestrator | 2025-09-29 06:17:41.832839 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-29 06:17:41.832852 | orchestrator | Monday 29 September 2025 06:14:38 +0000 (0:00:02.844) 0:00:07.887 ****** 2025-09-29 06:17:41.832865 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.832878 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.832891 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.832903 | orchestrator | 2025-09-29 06:17:41.832916 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-29 06:17:41.832926 | orchestrator | Monday 29 September 2025 06:14:39 +0000 (0:00:00.638) 0:00:08.525 ****** 2025-09-29 06:17:41.832937 | orchestrator | skipping: [testbed-node-1] 2025-09-29 
06:17:41.832948 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.832959 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.832970 | orchestrator | 2025-09-29 06:17:41.832980 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-29 06:17:41.832991 | orchestrator | Monday 29 September 2025 06:14:41 +0000 (0:00:01.677) 0:00:10.203 ****** 2025-09-29 06:17:41.833008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.833027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.833052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 
06:17:41.833065 | orchestrator | 2025-09-29 06:17:41.833076 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-29 06:17:41.833087 | orchestrator | Monday 29 September 2025 06:14:44 +0000 (0:00:03.738) 0:00:13.942 ****** 2025-09-29 06:17:41.833097 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.833108 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.833119 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.833130 | orchestrator | 2025-09-29 06:17:41.833141 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-29 06:17:41.833151 | orchestrator | Monday 29 September 2025 06:14:45 +0000 (0:00:01.127) 0:00:15.070 ****** 2025-09-29 06:17:41.833162 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.833173 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:17:41.833183 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:17:41.833194 | orchestrator | 2025-09-29 06:17:41.833205 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-29 06:17:41.833216 | orchestrator | Monday 29 September 2025 06:14:50 +0000 (0:00:04.499) 0:00:19.569 ****** 2025-09-29 06:17:41.833235 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:17:41.833246 | orchestrator | 2025-09-29 06:17:41.833257 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-29 06:17:41.833267 | orchestrator | Monday 29 September 2025 06:14:51 +0000 (0:00:00.604) 0:00:20.174 ****** 2025-09-29 06:17:41.833288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833301 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.833324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833343 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.833381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833395 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.833407 | orchestrator | 2025-09-29 06:17:41.833417 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-29 06:17:41.833428 | orchestrator | Monday 29 September 2025 06:14:54 +0000 (0:00:03.613) 0:00:23.788 ****** 2025-09-29 06:17:41.833445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833463 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.833481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833494 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.833510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833522 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.833533 | orchestrator | 2025-09-29 06:17:41.833544 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-29 06:17:41.833562 | orchestrator | Monday 29 September 2025 06:14:57 +0000 (0:00:02.736) 0:00:26.525 ****** 2025-09-29 06:17:41.833573 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833586 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.833611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833624 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.833635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-29 06:17:41.833655 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.833666 | orchestrator | 2025-09-29 06:17:41.833677 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-29 06:17:41.833688 | orchestrator | Monday 29 September 2025 06:15:00 +0000 
(0:00:03.045) 0:00:29.570 ****** 2025-09-29 06:17:41.833712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.833726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.833754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-29 06:17:41.833767 | orchestrator | 2025-09-29 06:17:41.833778 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-29 06:17:41.833789 | orchestrator | Monday 29 September 2025 06:15:04 +0000 (0:00:04.089) 0:00:33.660 ****** 2025-09-29 06:17:41.833800 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.833815 | orchestrator | 
changed: [testbed-node-2] 2025-09-29 06:17:41.833826 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:17:41.833837 | orchestrator | 2025-09-29 06:17:41.833848 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-29 06:17:41.833859 | orchestrator | Monday 29 September 2025 06:15:05 +0000 (0:00:00.850) 0:00:34.511 ****** 2025-09-29 06:17:41.833876 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.833887 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:17:41.833898 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:17:41.833908 | orchestrator | 2025-09-29 06:17:41.833919 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-29 06:17:41.833930 | orchestrator | Monday 29 September 2025 06:15:05 +0000 (0:00:00.405) 0:00:34.917 ****** 2025-09-29 06:17:41.833941 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.833952 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:17:41.833962 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:17:41.833973 | orchestrator | 2025-09-29 06:17:41.833984 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-29 06:17:41.833995 | orchestrator | Monday 29 September 2025 06:15:06 +0000 (0:00:00.326) 0:00:35.243 ****** 2025-09-29 06:17:41.834007 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-29 06:17:41.834077 | orchestrator | ...ignoring 2025-09-29 06:17:41.834090 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-29 06:17:41.834101 | orchestrator | ...ignoring 2025-09-29 06:17:41.834112 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-29 06:17:41.834122 | orchestrator | ...ignoring 2025-09-29 06:17:41.834133 | orchestrator | 2025-09-29 06:17:41.834144 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-29 06:17:41.834155 | orchestrator | Monday 29 September 2025 06:15:17 +0000 (0:00:10.898) 0:00:46.141 ****** 2025-09-29 06:17:41.834165 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.834176 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:17:41.834186 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:17:41.834197 | orchestrator | 2025-09-29 06:17:41.834208 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-29 06:17:41.834219 | orchestrator | Monday 29 September 2025 06:15:17 +0000 (0:00:00.370) 0:00:46.511 ****** 2025-09-29 06:17:41.834229 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.834240 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.834250 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.834261 | orchestrator | 2025-09-29 06:17:41.834272 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-29 06:17:41.834282 | orchestrator | Monday 29 September 2025 06:15:17 +0000 (0:00:00.546) 0:00:47.058 ****** 2025-09-29 06:17:41.834293 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.834304 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.834314 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.834325 | orchestrator | 2025-09-29 06:17:41.834335 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-29 06:17:41.834346 | orchestrator | Monday 29 September 2025 06:15:18 +0000 (0:00:00.380) 0:00:47.438 ****** 2025-09-29 06:17:41.834356 | orchestrator | skipping: 
[testbed-node-0] 2025-09-29 06:17:41.834415 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.834427 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.834438 | orchestrator | 2025-09-29 06:17:41.834449 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-29 06:17:41.834460 | orchestrator | Monday 29 September 2025 06:15:18 +0000 (0:00:00.337) 0:00:47.776 ****** 2025-09-29 06:17:41.834470 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.834481 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:17:41.834492 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:17:41.834505 | orchestrator | 2025-09-29 06:17:41.834524 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-29 06:17:41.834542 | orchestrator | Monday 29 September 2025 06:15:19 +0000 (0:00:00.434) 0:00:48.211 ****** 2025-09-29 06:17:41.834582 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.834601 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.834619 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.834632 | orchestrator | 2025-09-29 06:17:41.834643 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-29 06:17:41.834654 | orchestrator | Monday 29 September 2025 06:15:19 +0000 (0:00:00.505) 0:00:48.716 ****** 2025-09-29 06:17:41.834664 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.834675 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.834686 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-29 06:17:41.834697 | orchestrator | 2025-09-29 06:17:41.834708 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-29 06:17:41.834718 | orchestrator | Monday 29 September 2025 06:15:19 +0000 (0:00:00.320) 0:00:49.037 ****** 2025-09-29 
06:17:41.834729 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.834740 | orchestrator | 2025-09-29 06:17:41.834750 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-29 06:17:41.834761 | orchestrator | Monday 29 September 2025 06:15:30 +0000 (0:00:10.319) 0:00:59.356 ****** 2025-09-29 06:17:41.834772 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.834782 | orchestrator | 2025-09-29 06:17:41.834793 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-29 06:17:41.834804 | orchestrator | Monday 29 September 2025 06:15:30 +0000 (0:00:00.115) 0:00:59.472 ****** 2025-09-29 06:17:41.834814 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.834825 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.834836 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.834846 | orchestrator | 2025-09-29 06:17:41.834857 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-29 06:17:41.834868 | orchestrator | Monday 29 September 2025 06:15:31 +0000 (0:00:00.836) 0:01:00.309 ****** 2025-09-29 06:17:41.834885 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.834896 | orchestrator | 2025-09-29 06:17:41.834907 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-29 06:17:41.834917 | orchestrator | Monday 29 September 2025 06:15:38 +0000 (0:00:07.660) 0:01:07.969 ****** 2025-09-29 06:17:41.834928 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.834939 | orchestrator | 2025-09-29 06:17:41.834949 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-29 06:17:41.834960 | orchestrator | Monday 29 September 2025 06:15:40 +0000 (0:00:01.735) 0:01:09.705 ****** 2025-09-29 06:17:41.834970 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.834981 | 
orchestrator | 2025-09-29 06:17:41.834992 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-29 06:17:41.835002 | orchestrator | Monday 29 September 2025 06:15:43 +0000 (0:00:02.489) 0:01:12.194 ****** 2025-09-29 06:17:41.835013 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.835023 | orchestrator | 2025-09-29 06:17:41.835034 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-29 06:17:41.835045 | orchestrator | Monday 29 September 2025 06:15:43 +0000 (0:00:00.130) 0:01:12.325 ****** 2025-09-29 06:17:41.835055 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.835066 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.835077 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.835087 | orchestrator | 2025-09-29 06:17:41.835098 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-29 06:17:41.835108 | orchestrator | Monday 29 September 2025 06:15:43 +0000 (0:00:00.302) 0:01:12.628 ****** 2025-09-29 06:17:41.835119 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.835130 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-29 06:17:41.835140 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:17:41.835150 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:17:41.835161 | orchestrator | 2025-09-29 06:17:41.835179 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-29 06:17:41.835189 | orchestrator | skipping: no hosts matched 2025-09-29 06:17:41.835200 | orchestrator | 2025-09-29 06:17:41.835211 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-29 06:17:41.835221 | orchestrator | 2025-09-29 06:17:41.835232 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-09-29 06:17:41.835243 | orchestrator | Monday 29 September 2025 06:15:44 +0000 (0:00:00.568) 0:01:13.196 ****** 2025-09-29 06:17:41.835253 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:17:41.835264 | orchestrator | 2025-09-29 06:17:41.835275 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-29 06:17:41.835285 | orchestrator | Monday 29 September 2025 06:16:00 +0000 (0:00:16.408) 0:01:29.604 ****** 2025-09-29 06:17:41.835296 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Wait for MariaDB service port liveness (10 retries left). 2025-09-29 06:17:41.835307 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:17:41.835318 | orchestrator | 2025-09-29 06:17:41.835329 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-29 06:17:41.835339 | orchestrator | Monday 29 September 2025 06:16:21 +0000 (0:00:20.969) 0:01:50.574 ****** 2025-09-29 06:17:41.835350 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:17:41.835378 | orchestrator | 2025-09-29 06:17:41.835390 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-29 06:17:41.835401 | orchestrator | 2025-09-29 06:17:41.835411 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-29 06:17:41.835422 | orchestrator | Monday 29 September 2025 06:16:23 +0000 (0:00:02.389) 0:01:52.963 ****** 2025-09-29 06:17:41.835433 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:17:41.835443 | orchestrator | 2025-09-29 06:17:41.835454 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-29 06:17:41.835465 | orchestrator | Monday 29 September 2025 06:16:41 +0000 (0:00:17.609) 0:02:10.573 ****** 2025-09-29 06:17:41.835475 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:17:41.835486 | orchestrator | 2025-09-29 
06:17:41.835497 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-29 06:17:41.835507 | orchestrator | Monday 29 September 2025 06:17:03 +0000 (0:00:21.558) 0:02:32.132 ****** 2025-09-29 06:17:41.835524 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:17:41.835535 | orchestrator | 2025-09-29 06:17:41.835546 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-29 06:17:41.835557 | orchestrator | 2025-09-29 06:17:41.835568 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-29 06:17:41.835579 | orchestrator | Monday 29 September 2025 06:17:05 +0000 (0:00:02.517) 0:02:34.649 ****** 2025-09-29 06:17:41.835589 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.835600 | orchestrator | 2025-09-29 06:17:41.835611 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-29 06:17:41.835621 | orchestrator | Monday 29 September 2025 06:17:17 +0000 (0:00:11.711) 0:02:46.360 ****** 2025-09-29 06:17:41.835632 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.835642 | orchestrator | 2025-09-29 06:17:41.835653 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-29 06:17:41.835664 | orchestrator | Monday 29 September 2025 06:17:22 +0000 (0:00:05.591) 0:02:51.952 ****** 2025-09-29 06:17:41.835675 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.835685 | orchestrator | 2025-09-29 06:17:41.835696 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-29 06:17:41.835706 | orchestrator | 2025-09-29 06:17:41.835717 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-29 06:17:41.835728 | orchestrator | Monday 29 September 2025 06:17:25 +0000 (0:00:02.884) 0:02:54.836 ****** 2025-09-29 
06:17:41.835738 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:17:41.835749 | orchestrator | 2025-09-29 06:17:41.835760 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-29 06:17:41.835781 | orchestrator | Monday 29 September 2025 06:17:26 +0000 (0:00:00.618) 0:02:55.454 ****** 2025-09-29 06:17:41.835792 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.835802 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.835818 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.835829 | orchestrator | 2025-09-29 06:17:41.835840 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-29 06:17:41.835851 | orchestrator | Monday 29 September 2025 06:17:28 +0000 (0:00:02.594) 0:02:58.048 ****** 2025-09-29 06:17:41.835862 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.835872 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.835883 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.835894 | orchestrator | 2025-09-29 06:17:41.835905 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-29 06:17:41.835915 | orchestrator | Monday 29 September 2025 06:17:31 +0000 (0:00:02.507) 0:03:00.556 ****** 2025-09-29 06:17:41.835926 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.835937 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.835947 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.835958 | orchestrator | 2025-09-29 06:17:41.835968 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-29 06:17:41.835979 | orchestrator | Monday 29 September 2025 06:17:33 +0000 (0:00:02.390) 0:03:02.947 ****** 2025-09-29 06:17:41.835990 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.836001 | orchestrator | 
skipping: [testbed-node-2] 2025-09-29 06:17:41.836011 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:17:41.836022 | orchestrator | 2025-09-29 06:17:41.836033 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-29 06:17:41.836043 | orchestrator | Monday 29 September 2025 06:17:36 +0000 (0:00:02.326) 0:03:05.273 ****** 2025-09-29 06:17:41.836054 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:17:41.836065 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:17:41.836075 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:17:41.836086 | orchestrator | 2025-09-29 06:17:41.836097 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-29 06:17:41.836108 | orchestrator | Monday 29 September 2025 06:17:39 +0000 (0:00:02.848) 0:03:08.121 ****** 2025-09-29 06:17:41.836118 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:17:41.836129 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:17:41.836139 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:17:41.836150 | orchestrator | 2025-09-29 06:17:41.836161 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:17:41.836172 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-29 06:17:41.836183 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-29 06:17:41.836195 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-29 06:17:41.836206 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-29 06:17:41.836216 | orchestrator | 2025-09-29 06:17:41.836227 | orchestrator | 2025-09-29 06:17:41.836238 | orchestrator | TASKS RECAP ******************************************************************** 
2025-09-29 06:17:41.836249 | orchestrator | Monday 29 September 2025 06:17:39 +0000 (0:00:00.215) 0:03:08.337 ****** 2025-09-29 06:17:41.836259 | orchestrator | =============================================================================== 2025-09-29 06:17:41.836270 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 42.53s 2025-09-29 06:17:41.836281 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.02s 2025-09-29 06:17:41.836297 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.71s 2025-09-29 06:17:41.836308 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.90s 2025-09-29 06:17:41.836318 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.32s 2025-09-29 06:17:41.836334 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.66s 2025-09-29 06:17:41.836345 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.59s 2025-09-29 06:17:41.836356 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.91s 2025-09-29 06:17:41.836383 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.50s 2025-09-29 06:17:41.836394 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.09s 2025-09-29 06:17:41.836405 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.74s 2025-09-29 06:17:41.836416 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.61s 2025-09-29 06:17:41.836426 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.05s 2025-09-29 06:17:41.836437 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s 2025-09-29 
06:17:41.836447 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.88s 2025-09-29 06:17:41.836458 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.85s 2025-09-29 06:17:41.836468 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.84s 2025-09-29 06:17:41.836479 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.74s 2025-09-29 06:17:41.836490 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.59s 2025-09-29 06:17:41.836500 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.51s 2025-09-29 06:17:41.836511 | orchestrator | 2025-09-29 06:17:41 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:44.856799 | orchestrator | 2025-09-29 06:17:44 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:44.857392 | orchestrator | 2025-09-29 06:17:44 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:17:44.858239 | orchestrator | 2025-09-29 06:17:44 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:17:44.858784 | orchestrator | 2025-09-29 06:17:44 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:47.899238 | orchestrator | 2025-09-29 06:17:47 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:47.901248 | orchestrator | 2025-09-29 06:17:47 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:17:47.903600 | orchestrator | 2025-09-29 06:17:47 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:17:47.903764 | orchestrator | 2025-09-29 06:17:47 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:50.952167 | orchestrator | 2025-09-29 06:17:50 | INFO  | Task 
c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:50.952265 | orchestrator | 2025-09-29 06:17:50 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:17:50.953056 | orchestrator | 2025-09-29 06:17:50 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:17:50.953082 | orchestrator | 2025-09-29 06:17:50 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:53.977283 | orchestrator | 2025-09-29 06:17:53 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:53.977478 | orchestrator | 2025-09-29 06:17:53 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:17:53.977537 | orchestrator | 2025-09-29 06:17:53 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:17:53.977781 | orchestrator | 2025-09-29 06:17:53 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:17:57.016170 | orchestrator | 2025-09-29 06:17:57 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:17:57.016259 | orchestrator | 2025-09-29 06:17:57 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:17:57.016710 | orchestrator | 2025-09-29 06:17:57 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:17:57.016864 | orchestrator | 2025-09-29 06:17:57 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:00.051244 | orchestrator | 2025-09-29 06:18:00 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:00.055214 | orchestrator | 2025-09-29 06:18:00 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:00.056863 | orchestrator | 2025-09-29 06:18:00 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:00.056896 | orchestrator | 2025-09-29 06:18:00 | INFO  | Wait 1 second(s) until the next 
check 2025-09-29 06:18:03.097138 | orchestrator | 2025-09-29 06:18:03 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:03.101458 | orchestrator | 2025-09-29 06:18:03 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:03.104371 | orchestrator | 2025-09-29 06:18:03 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:03.104488 | orchestrator | 2025-09-29 06:18:03 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:06.140870 | orchestrator | 2025-09-29 06:18:06 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:06.144367 | orchestrator | 2025-09-29 06:18:06 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:06.146589 | orchestrator | 2025-09-29 06:18:06 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:06.146642 | orchestrator | 2025-09-29 06:18:06 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:09.196165 | orchestrator | 2025-09-29 06:18:09 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:09.197687 | orchestrator | 2025-09-29 06:18:09 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:09.199088 | orchestrator | 2025-09-29 06:18:09 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:09.199133 | orchestrator | 2025-09-29 06:18:09 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:12.244076 | orchestrator | 2025-09-29 06:18:12 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:12.245445 | orchestrator | 2025-09-29 06:18:12 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:12.246969 | orchestrator | 2025-09-29 06:18:12 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 
06:18:12.247025 | orchestrator | 2025-09-29 06:18:12 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:15.284190 | orchestrator | 2025-09-29 06:18:15 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:15.285999 | orchestrator | 2025-09-29 06:18:15 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:15.289948 | orchestrator | 2025-09-29 06:18:15 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:15.289985 | orchestrator | 2025-09-29 06:18:15 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:18.328928 | orchestrator | 2025-09-29 06:18:18 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:18.330543 | orchestrator | 2025-09-29 06:18:18 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:18.331978 | orchestrator | 2025-09-29 06:18:18 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:18.332155 | orchestrator | 2025-09-29 06:18:18 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:21.382184 | orchestrator | 2025-09-29 06:18:21 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:21.383258 | orchestrator | 2025-09-29 06:18:21 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:21.385021 | orchestrator | 2025-09-29 06:18:21 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:21.385060 | orchestrator | 2025-09-29 06:18:21 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:24.426643 | orchestrator | 2025-09-29 06:18:24 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:24.427783 | orchestrator | 2025-09-29 06:18:24 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:24.429737 | orchestrator | 2025-09-29 06:18:24 | 
INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:24.429831 | orchestrator | 2025-09-29 06:18:24 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:27.476247 | orchestrator | 2025-09-29 06:18:27 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:27.476889 | orchestrator | 2025-09-29 06:18:27 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:27.479994 | orchestrator | 2025-09-29 06:18:27 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:27.480035 | orchestrator | 2025-09-29 06:18:27 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:30.521501 | orchestrator | 2025-09-29 06:18:30 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:30.522760 | orchestrator | 2025-09-29 06:18:30 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:30.524578 | orchestrator | 2025-09-29 06:18:30 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:30.524638 | orchestrator | 2025-09-29 06:18:30 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:33.560189 | orchestrator | 2025-09-29 06:18:33 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:33.560772 | orchestrator | 2025-09-29 06:18:33 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:33.561762 | orchestrator | 2025-09-29 06:18:33 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:33.561778 | orchestrator | 2025-09-29 06:18:33 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:36.604470 | orchestrator | 2025-09-29 06:18:36 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:36.606121 | orchestrator | 2025-09-29 06:18:36 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in 
state STARTED 2025-09-29 06:18:36.606841 | orchestrator | 2025-09-29 06:18:36 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:36.606918 | orchestrator | 2025-09-29 06:18:36 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:39.643907 | orchestrator | 2025-09-29 06:18:39 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:39.645818 | orchestrator | 2025-09-29 06:18:39 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:39.649238 | orchestrator | 2025-09-29 06:18:39 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:39.649384 | orchestrator | 2025-09-29 06:18:39 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:42.702318 | orchestrator | 2025-09-29 06:18:42 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:42.704179 | orchestrator | 2025-09-29 06:18:42 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:42.706987 | orchestrator | 2025-09-29 06:18:42 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:42.707053 | orchestrator | 2025-09-29 06:18:42 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:45.753021 | orchestrator | 2025-09-29 06:18:45 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:45.754740 | orchestrator | 2025-09-29 06:18:45 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:45.756782 | orchestrator | 2025-09-29 06:18:45 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:45.756804 | orchestrator | 2025-09-29 06:18:45 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:48.808767 | orchestrator | 2025-09-29 06:18:48 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:48.810547 | orchestrator 
| 2025-09-29 06:18:48 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:48.813376 | orchestrator | 2025-09-29 06:18:48 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:48.813422 | orchestrator | 2025-09-29 06:18:48 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:51.859493 | orchestrator | 2025-09-29 06:18:51 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state STARTED 2025-09-29 06:18:51.860901 | orchestrator | 2025-09-29 06:18:51 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:51.862593 | orchestrator | 2025-09-29 06:18:51 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:51.862687 | orchestrator | 2025-09-29 06:18:51 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:54.913123 | orchestrator | 2025-09-29 06:18:54.913211 | orchestrator | 2025-09-29 06:18:54.913222 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-29 06:18:54.913230 | orchestrator | 2025-09-29 06:18:54.913237 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-29 06:18:54.913243 | orchestrator | Monday 29 September 2025 06:16:42 +0000 (0:00:00.560) 0:00:00.560 ****** 2025-09-29 06:18:54.913304 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:18:54.913312 | orchestrator | 2025-09-29 06:18:54.913319 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-29 06:18:54.913326 | orchestrator | Monday 29 September 2025 06:16:43 +0000 (0:00:00.519) 0:00:01.079 ****** 2025-09-29 06:18:54.913332 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:18:54.913340 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:18:54.913346 | orchestrator | ok: [testbed-node-3] 2025-09-29 
06:18:54.913351 | orchestrator |
2025-09-29 06:18:54.913358 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-29 06:18:54.913385 | orchestrator | Monday 29 September 2025 06:16:43 +0000 (0:00:00.587) 0:00:01.667 ******
2025-09-29 06:18:54.913392 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.913398 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.913403 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.913409 | orchestrator |
2025-09-29 06:18:54.913415 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-29 06:18:54.913422 | orchestrator | Monday 29 September 2025 06:16:44 +0000 (0:00:00.246) 0:00:01.913 ******
2025-09-29 06:18:54.913428 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.913434 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.913441 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.913447 | orchestrator |
2025-09-29 06:18:54.913453 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-29 06:18:54.913460 | orchestrator | Monday 29 September 2025 06:16:44 +0000 (0:00:00.714) 0:00:02.627 ******
2025-09-29 06:18:54.913466 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.913472 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.913478 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.913484 | orchestrator |
2025-09-29 06:18:54.913491 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-29 06:18:54.913497 | orchestrator | Monday 29 September 2025 06:16:45 +0000 (0:00:00.304) 0:00:02.932 ******
2025-09-29 06:18:54.913503 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.913509 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.913516 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.913522 | orchestrator |
2025-09-29 06:18:54.913528 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-29 06:18:54.913549 | orchestrator | Monday 29 September 2025 06:16:45 +0000 (0:00:00.284) 0:00:03.217 ******
2025-09-29 06:18:54.913555 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.913561 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.913567 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.913573 | orchestrator |
2025-09-29 06:18:54.913580 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-29 06:18:54.913586 | orchestrator | Monday 29 September 2025 06:16:45 +0000 (0:00:00.291) 0:00:03.509 ******
2025-09-29 06:18:54.913594 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.913601 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.913708 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.913716 | orchestrator |
2025-09-29 06:18:54.913723 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-29 06:18:54.913729 | orchestrator | Monday 29 September 2025 06:16:46 +0000 (0:00:00.469) 0:00:03.978 ******
2025-09-29 06:18:54.913785 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.913793 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.913800 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.913807 | orchestrator |
2025-09-29 06:18:54.913814 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-29 06:18:54.913821 | orchestrator | Monday 29 September 2025 06:16:46 +0000 (0:00:00.318) 0:00:04.296 ******
2025-09-29 06:18:54.913828 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-29 06:18:54.913836 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-29 06:18:54.913843 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-29 06:18:54.913849 | orchestrator |
2025-09-29 06:18:54.913857 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-29 06:18:54.913864 | orchestrator | Monday 29 September 2025 06:16:47 +0000 (0:00:00.615) 0:00:04.912 ******
2025-09-29 06:18:54.913871 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.913879 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.913885 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.913892 | orchestrator |
2025-09-29 06:18:54.913899 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-29 06:18:54.913915 | orchestrator | Monday 29 September 2025 06:16:47 +0000 (0:00:00.426) 0:00:05.338 ******
2025-09-29 06:18:54.913922 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-29 06:18:54.913929 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-29 06:18:54.913936 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-29 06:18:54.913942 | orchestrator |
2025-09-29 06:18:54.913950 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-29 06:18:54.913957 | orchestrator | Monday 29 September 2025 06:16:49 +0000 (0:00:02.057) 0:00:07.396 ******
2025-09-29 06:18:54.913963 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-29 06:18:54.913971 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-29 06:18:54.913978 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-29 06:18:54.913985 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.913992 | orchestrator |
2025-09-29 06:18:54.913999 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use]
*********************
2025-09-29 06:18:54.914066 | orchestrator | Monday 29 September 2025 06:16:50 +0000 (0:00:00.352) 0:00:07.749 ******
2025-09-29 06:18:54.914078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914094 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914101 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914485 | orchestrator |
2025-09-29 06:18:54.914505 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-09-29 06:18:54.914513 | orchestrator | Monday 29 September 2025 06:16:50 +0000 (0:00:00.693) 0:00:08.443 ******
2025-09-29 06:18:54.914522 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914552 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914559 | orchestrator |
2025-09-29 06:18:54.914566 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-09-29 06:18:54.914573 | orchestrator | Monday 29 September 2025 06:16:50 +0000 (0:00:00.141) 0:00:08.584 ******
2025-09-29 06:18:54.914589 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '36a6b07538c8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-29 06:16:48.293889', 'end': '2025-09-29 06:16:48.341926', 'delta': '0:00:00.048037', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['36a6b07538c8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914599 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0341792094c2', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-29 06:16:49.035261', 'end': '2025-09-29 06:16:49.072836', 'delta': '0:00:00.037575', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0341792094c2'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914632 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2f807e810c6d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-29 06:16:49.565714', 'end': '2025-09-29 06:16:49.602870', 'delta': '0:00:00.037156', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2f807e810c6d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.914639 | orchestrator |
2025-09-29 06:18:54.914645 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-29 06:18:54.914651 | orchestrator | Monday 29 September 2025 06:16:51 +0000 (0:00:00.296) 0:00:08.881 ******
2025-09-29 06:18:54.914657 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.914664 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.914670 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.914676 | orchestrator |
2025-09-29 06:18:54.914683 | orchestrator
| TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-29 06:18:54.914689 | orchestrator | Monday 29 September 2025 06:16:51 +0000 (0:00:00.372) 0:00:09.253 ******
2025-09-29 06:18:54.914695 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-29 06:18:54.914701 | orchestrator |
2025-09-29 06:18:54.914707 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-29 06:18:54.914713 | orchestrator | Monday 29 September 2025 06:16:53 +0000 (0:00:01.632) 0:00:10.885 ******
2025-09-29 06:18:54.914719 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914725 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.914730 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.914734 | orchestrator |
2025-09-29 06:18:54.914738 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-29 06:18:54.914744 | orchestrator | Monday 29 September 2025 06:16:53 +0000 (0:00:00.261) 0:00:11.146 ******
2025-09-29 06:18:54.914750 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914755 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.914761 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.914773 | orchestrator |
2025-09-29 06:18:54.914779 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-29 06:18:54.914789 | orchestrator | Monday 29 September 2025 06:16:53 +0000 (0:00:00.341) 0:00:11.488 ******
2025-09-29 06:18:54.914795 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914801 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.914807 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.914812 | orchestrator |
2025-09-29 06:18:54.914818 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-29 06:18:54.914842 | orchestrator | Monday 29 September 2025 06:16:54 +0000 (0:00:00.358) 0:00:11.846 ******
2025-09-29 06:18:54.914849 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.914855 | orchestrator |
2025-09-29 06:18:54.914861 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-29 06:18:54.914868 | orchestrator | Monday 29 September 2025 06:16:54 +0000 (0:00:00.126) 0:00:11.972 ******
2025-09-29 06:18:54.914874 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914880 | orchestrator |
2025-09-29 06:18:54.914886 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-29 06:18:54.914893 | orchestrator | Monday 29 September 2025 06:16:54 +0000 (0:00:00.199) 0:00:12.172 ******
2025-09-29 06:18:54.914899 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914905 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.914912 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.914918 | orchestrator |
2025-09-29 06:18:54.914924 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-29 06:18:54.914930 | orchestrator | Monday 29 September 2025 06:16:54 +0000 (0:00:00.239) 0:00:12.412 ******
2025-09-29 06:18:54.914937 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914943 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.914949 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.914955 | orchestrator |
2025-09-29 06:18:54.914962 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-29 06:18:54.914968 | orchestrator | Monday 29 September 2025 06:16:55 +0000 (0:00:00.278) 0:00:12.690 ******
2025-09-29 06:18:54.914974 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.914980 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.914986 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.914993 | orchestrator |
2025-09-29 06:18:54.914998 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-29 06:18:54.915004 | orchestrator | Monday 29 September 2025 06:16:55 +0000 (0:00:00.376) 0:00:13.066 ******
2025-09-29 06:18:54.915011 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.915018 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.915041 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.915045 | orchestrator |
2025-09-29 06:18:54.915049 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-29 06:18:54.915053 | orchestrator | Monday 29 September 2025 06:16:55 +0000 (0:00:00.280) 0:00:13.347 ******
2025-09-29 06:18:54.915057 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.915060 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.915064 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.915067 | orchestrator |
2025-09-29 06:18:54.915071 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-29 06:18:54.915075 | orchestrator | Monday 29 September 2025 06:16:55 +0000 (0:00:00.281) 0:00:13.629 ******
2025-09-29 06:18:54.915078 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.915082 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.915088 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.915095 | orchestrator |
2025-09-29 06:18:54.915102 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-29 06:18:54.915137 | orchestrator | Monday 29 September 2025 06:16:56 +0000 (0:00:00.287) 0:00:13.916 ******
2025-09-29 06:18:54.915144 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.915157 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.915163 | orchestrator | skipping:
[testbed-node-5]
2025-09-29 06:18:54.915169 | orchestrator |
2025-09-29 06:18:54.915176 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-29 06:18:54.915182 | orchestrator | Monday 29 September 2025 06:16:56 +0000 (0:00:00.389) 0:00:14.306 ******
2025-09-29 06:18:54.915191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--da34c784--00a3--5dad--8c50--6eedba006e78-osd--block--da34c784--00a3--5dad--8c50--6eedba006e78', 'dm-uuid-LVM-d8NZKwy7ftTse94wxkQnua72TKxupiytuYe05Wity3i14Qhl4VROCqD6knnOpqAB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b44ac90--f026--5081--896e--3232400f6176-osd--block--5b44ac90--f026--5081--896e--3232400f6176', 'dm-uuid-LVM-gbt4G8bLFnvTRoMGrRQv1WI2eQvndVYhJFVCvKStPk7a3I2lDgG5CRAphg1emVFQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915237 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--34f4ec66--7b15--5133--bf2a--17bf3a27b54a-osd--block--34f4ec66--7b15--5133--bf2a--17bf3a27b54a', 'dm-uuid-LVM-4wYSlljS0T5isP1TsPE4NfyE6gf8XLP3gnnp7iCeVETjfDMSauD4FYQuBuhttzAd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46f249ea--6148--566c--bc01--762c6d5847ca-osd--block--46f249ea--6148--566c--bc01--762c6d5847ca', 'dm-uuid-LVM-aHmAI4mSI4GUFXsGTUitW9CtCm0Sokn4urRKMuHj22aPNrbz6y4iTC4qHk29xAcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--da34c784--00a3--5dad--8c50--6eedba006e78-osd--block--da34c784--00a3--5dad--8c50--6eedba006e78'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tXc4sV-CyJx-xHZf-oWbW-W8Ro-lx7X-V1kqk3', 'scsi-0QEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc', 'scsi-SQEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5b44ac90--f026--5081--896e--3232400f6176-osd--block--5b44ac90--f026--5081--896e--3232400f6176'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MIMPw5-trda-akOu-1E4D-MbC0-mKzE-Ri7y2c', 'scsi-0QEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f', 'scsi-SQEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a', 'scsi-SQEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915390 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915409 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part1', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part14', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part15', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part16', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--34f4ec66--7b15--5133--bf2a--17bf3a27b54a-osd--block--34f4ec66--7b15--5133--bf2a--17bf3a27b54a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ed0ddS-DppI-QaOd-7IaL-3t1j-CG8t-ctGImb', 'scsi-0QEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495', 'scsi-SQEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--46f249ea--6148--566c--bc01--762c6d5847ca-osd--block--46f249ea--6148--566c--bc01--762c6d5847ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hev04r-oN11-kdP7-DYe0-VScV-6gkx-btEQdm', 'scsi-0QEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee', 'scsi-SQEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60', 'scsi-SQEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915461 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.915468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-29 06:18:54.915475 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.915481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6be24fb8--e256--5721--a6a2--6a7f57bf9910-osd--block--6be24fb8--e256--5721--a6a2--6a7f57bf9910', 'dm-uuid-LVM-s0RZBQmqqycgxl7e1JyPQJ20o6pfPZupBQeyEBzW1QjSysrvihySRmw78rfZYOSC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-29 06:18:54.915495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [],
'host': '', 'links': {'ids': ['dm-name-ceph--ed2553fc--8d98--5289--a275--720d5101f8b0-osd--block--ed2553fc--8d98--5289--a275--720d5101f8b0', 'dm-uuid-LVM-wNcPumkRip1ZOpXlItEaf9IOEdsSKVCe0LasbViKWzx55fVH1GrLseZl3obgMGl5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54 | INFO  | Task c572f23d-887e-4ca5-8e55-8291cf0e4ad1 is in state SUCCESS 2025-09-29 06:18:54.915511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-29 06:18:54.915582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16'], 
'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:18:54.915594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6be24fb8--e256--5721--a6a2--6a7f57bf9910-osd--block--6be24fb8--e256--5721--a6a2--6a7f57bf9910'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pURzBD-dWYd-GrBi-KdcW-a30h-oPrL-2UTKtr', 'scsi-0QEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1', 'scsi-SQEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:18:54.915667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ed2553fc--8d98--5289--a275--720d5101f8b0-osd--block--ed2553fc--8d98--5289--a275--720d5101f8b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LE2qnG-mk7t-bolv-6CtS-Ai8F-43K4-bf1ZWy', 'scsi-0QEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330', 'scsi-SQEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:18:54.915686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5', 'scsi-SQEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:18:54.915700 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-29 06:18:54.915707 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:18:54.915715 | orchestrator | 2025-09-29 06:18:54.915719 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-09-29 06:18:54.915723 | orchestrator | Monday 29 September 2025 06:16:57 +0000 (0:00:00.551) 0:00:14.858 ****** 2025-09-29 06:18:54.915727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--da34c784--00a3--5dad--8c50--6eedba006e78-osd--block--da34c784--00a3--5dad--8c50--6eedba006e78', 'dm-uuid-LVM-d8NZKwy7ftTse94wxkQnua72TKxupiytuYe05Wity3i14Qhl4VROCqD6knnOpqAB'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5b44ac90--f026--5081--896e--3232400f6176-osd--block--5b44ac90--f026--5081--896e--3232400f6176', 'dm-uuid-LVM-gbt4G8bLFnvTRoMGrRQv1WI2eQvndVYhJFVCvKStPk7a3I2lDgG5CRAphg1emVFQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915743 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--34f4ec66--7b15--5133--bf2a--17bf3a27b54a-osd--block--34f4ec66--7b15--5133--bf2a--17bf3a27b54a', 'dm-uuid-LVM-4wYSlljS0T5isP1TsPE4NfyE6gf8XLP3gnnp7iCeVETjfDMSauD4FYQuBuhttzAd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915814 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46f249ea--6148--566c--bc01--762c6d5847ca-osd--block--46f249ea--6148--566c--bc01--762c6d5847ca', 'dm-uuid-LVM-aHmAI4mSI4GUFXsGTUitW9CtCm0Sokn4urRKMuHj22aPNrbz6y4iTC4qHk29xAcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-29 06:18:54.915825 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915833 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915838 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915846 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8960639b-518d-4917-8774-29b1873047c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915858 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--da34c784--00a3--5dad--8c50--6eedba006e78-osd--block--da34c784--00a3--5dad--8c50--6eedba006e78'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tXc4sV-CyJx-xHZf-oWbW-W8Ro-lx7X-V1kqk3', 'scsi-0QEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc', 'scsi-SQEMU_QEMU_HARDDISK_47886bdb-eb57-4895-bb6c-095bf009f1bc'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915873 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5b44ac90--f026--5081--896e--3232400f6176-osd--block--5b44ac90--f026--5081--896e--3232400f6176'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MIMPw5-trda-akOu-1E4D-MbC0-mKzE-Ri7y2c', 'scsi-0QEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f', 'scsi-SQEMU_QEMU_HARDDISK_5f30f287-1956-4b14-b1b3-d656c5604e8f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915885 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a', 'scsi-SQEMU_QEMU_HARDDISK_6f7dc170-46a8-451b-ba46-45ea4054a55a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915898 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915906 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:18:54.915912 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915919 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part1', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part14', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part15', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part15'], 'labels': ['UEFI'], 
'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part16', 'scsi-SQEMU_QEMU_HARDDISK_3086d38e-d295-49b8-8314-7ddf42b6d254-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915935 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--34f4ec66--7b15--5133--bf2a--17bf3a27b54a-osd--block--34f4ec66--7b15--5133--bf2a--17bf3a27b54a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Ed0ddS-DppI-QaOd-7IaL-3t1j-CG8t-ctGImb', 'scsi-0QEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495', 'scsi-SQEMU_QEMU_HARDDISK_9d6ffe74-7843-4b92-a660-34a8dc91d495'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915942 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--46f249ea--6148--566c--bc01--762c6d5847ca-osd--block--46f249ea--6148--566c--bc01--762c6d5847ca'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hev04r-oN11-kdP7-DYe0-VScV-6gkx-btEQdm', 'scsi-0QEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee', 'scsi-SQEMU_QEMU_HARDDISK_975b133b-dd90-41fb-addf-6e21202a98ee'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915949 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60', 'scsi-SQEMU_QEMU_HARDDISK_a26f0dd0-3def-45cb-a526-391b85857c60'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915959 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6be24fb8--e256--5721--a6a2--6a7f57bf9910-osd--block--6be24fb8--e256--5721--a6a2--6a7f57bf9910', 'dm-uuid-LVM-s0RZBQmqqycgxl7e1JyPQJ20o6pfPZupBQeyEBzW1QjSysrvihySRmw78rfZYOSC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915966 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915972 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:18:54.915981 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ed2553fc--8d98--5289--a275--720d5101f8b0-osd--block--ed2553fc--8d98--5289--a275--720d5101f8b0', 'dm-uuid-LVM-wNcPumkRip1ZOpXlItEaf9IOEdsSKVCe0LasbViKWzx55fVH1GrLseZl3obgMGl5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.915998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916015 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916022 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916028 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916042 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916048 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916058 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15', 'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16', 
'scsi-SQEMU_QEMU_HARDDISK_8cd16bf8-25b0-486d-8255-2bac14d23493-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916069 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6be24fb8--e256--5721--a6a2--6a7f57bf9910-osd--block--6be24fb8--e256--5721--a6a2--6a7f57bf9910'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pURzBD-dWYd-GrBi-KdcW-a30h-oPrL-2UTKtr', 'scsi-0QEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1', 'scsi-SQEMU_QEMU_HARDDISK_212523ac-09f9-4a75-841f-e4e8427949d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916080 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ed2553fc--8d98--5289--a275--720d5101f8b0-osd--block--ed2553fc--8d98--5289--a275--720d5101f8b0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LE2qnG-mk7t-bolv-6CtS-Ai8F-43K4-bf1ZWy', 'scsi-0QEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330', 'scsi-SQEMU_QEMU_HARDDISK_a19be117-9776-4997-9c5a-50a933b8c330'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5', 'scsi-SQEMU_QEMU_HARDDISK_a41b09bf-4033-4d86-9fc9-338370a7c5d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-29 06:18:54.916097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-29-05-26-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-29 06:18:54.916103 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916110 | orchestrator |
2025-09-29 06:18:54.916116 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-29 06:18:54.916123 | orchestrator | Monday 29 September 2025 06:16:57 +0000 (0:00:00.533) 0:00:15.391 ******
2025-09-29 06:18:54.916129 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.916136 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.916142 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.916148 | orchestrator |
2025-09-29 06:18:54.916154 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-29 06:18:54.916160 | orchestrator | Monday 29 September 2025 06:16:58 +0000 (0:00:00.640) 0:00:16.031 ******
2025-09-29 06:18:54.916168 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.916174 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.916180 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.916186 | orchestrator |
2025-09-29 06:18:54.916192 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-29 06:18:54.916198 | orchestrator | Monday 29 September 2025 06:16:58 +0000 (0:00:00.367) 0:00:16.398 ******
2025-09-29 06:18:54.916203 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.916209 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.916214 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.916220 | orchestrator |
2025-09-29 06:18:54.916226 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-29 06:18:54.916232 | orchestrator | Monday 29 September 2025 06:16:59 +0000 (0:00:00.570) 0:00:16.969 ******
2025-09-29 06:18:54.916238 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916244 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916275 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916282 | orchestrator |
2025-09-29 06:18:54.916288 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-29 06:18:54.916294 | orchestrator | Monday 29 September 2025 06:16:59 +0000 (0:00:00.269) 0:00:17.238 ******
2025-09-29 06:18:54.916300 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916306 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916312 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916318 | orchestrator |
2025-09-29 06:18:54.916329 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-29 06:18:54.916335 | orchestrator | Monday 29 September 2025 06:16:59 +0000 (0:00:00.365) 0:00:17.604 ******
2025-09-29 06:18:54.916342 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916348 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916354 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916361 | orchestrator |
2025-09-29 06:18:54.916368 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-29 06:18:54.916374 | orchestrator | Monday 29 September 2025 06:17:00 +0000 (0:00:00.494) 0:00:18.099 ******
2025-09-29 06:18:54.916381 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-29 06:18:54.916387 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-29 06:18:54.916393 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-29 06:18:54.916400 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-29 06:18:54.916406 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-29 06:18:54.916413 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-29 06:18:54.916419 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-29 06:18:54.916425 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-29 06:18:54.916431 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-29 06:18:54.916438 | orchestrator |
2025-09-29 06:18:54.916444 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-29 06:18:54.916451 | orchestrator | Monday 29 September 2025 06:17:01 +0000 (0:00:00.847) 0:00:18.946 ******
2025-09-29 06:18:54.916457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-29 06:18:54.916463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-29 06:18:54.916470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-29 06:18:54.916476 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-29 06:18:54.916488 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-29 06:18:54.916495 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-29 06:18:54.916501 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916507 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-29 06:18:54.916514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-29 06:18:54.916526 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-29 06:18:54.916533 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916539 | orchestrator |
2025-09-29 06:18:54.916546 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-29 06:18:54.916552 | orchestrator | Monday 29 September 2025 06:17:01 +0000 (0:00:00.378) 0:00:19.325 ******
2025-09-29 06:18:54.916559 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:18:54.916566 | orchestrator |
2025-09-29 06:18:54.916573 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-29 06:18:54.916580 | orchestrator | Monday 29 September 2025 06:17:02 +0000 (0:00:00.671) 0:00:19.996 ******
2025-09-29 06:18:54.916592 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916599 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916606 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916611 | orchestrator |
2025-09-29 06:18:54.916616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-29 06:18:54.916619 | orchestrator | Monday 29 September 2025 06:17:02 +0000 (0:00:00.342) 0:00:20.338 ******
2025-09-29 06:18:54.916624 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916631 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916636 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916642 | orchestrator |
2025-09-29 06:18:54.916648 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-29 06:18:54.916654 | orchestrator | Monday 29 September 2025 06:17:02 +0000 (0:00:00.304) 0:00:20.642 ******
2025-09-29 06:18:54.916660 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916666 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916672 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:18:54.916678 | orchestrator |
2025-09-29 06:18:54.916683 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-29 06:18:54.916689 | orchestrator | Monday 29 September 2025 06:17:03 +0000 (0:00:00.310) 0:00:20.953 ******
2025-09-29 06:18:54.916694 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.916700 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.916706 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.916711 | orchestrator |
2025-09-29 06:18:54.916717 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-29 06:18:54.916723 | orchestrator | Monday 29 September 2025 06:17:03 +0000 (0:00:00.592) 0:00:21.545 ******
2025-09-29 06:18:54.916729 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:18:54.916736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:18:54.916741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:18:54.916747 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916753 | orchestrator |
2025-09-29 06:18:54.916759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-29 06:18:54.916765 | orchestrator | Monday 29 September 2025 06:17:04 +0000 (0:00:00.378) 0:00:21.923 ******
2025-09-29 06:18:54.916771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:18:54.916777 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:18:54.916783 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:18:54.916789 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916796 | orchestrator |
2025-09-29 06:18:54.916800 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-29 06:18:54.916810 | orchestrator | Monday 29 September 2025 06:17:04 +0000 (0:00:00.365) 0:00:22.289 ******
2025-09-29 06:18:54.916814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:18:54.916818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-29 06:18:54.916826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-29 06:18:54.916830 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916834 | orchestrator |
2025-09-29 06:18:54.916837 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-29 06:18:54.916841 | orchestrator | Monday 29 September 2025 06:17:04 +0000 (0:00:00.368) 0:00:22.658 ******
2025-09-29 06:18:54.916845 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:18:54.916848 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:18:54.916852 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:18:54.916856 | orchestrator |
2025-09-29 06:18:54.916860 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-29 06:18:54.916863 | orchestrator | Monday 29 September 2025 06:17:05 +0000 (0:00:00.308) 0:00:22.966 ******
2025-09-29 06:18:54.916867 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-29 06:18:54.916871 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-29 06:18:54.916874 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-29 06:18:54.916878 | orchestrator |
2025-09-29 06:18:54.916882 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-29 06:18:54.916885 | orchestrator | Monday 29 September 2025 06:17:05 +0000 (0:00:00.498) 0:00:23.465 ******
2025-09-29 06:18:54.916889 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-29 06:18:54.916893 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-29 06:18:54.916897 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-29 06:18:54.916900 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:18:54.916904 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-29 06:18:54.916908 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-29 06:18:54.916911 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-29 06:18:54.916915 | orchestrator |
2025-09-29 06:18:54.916919 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-29 06:18:54.916922 | orchestrator | Monday 29 September 2025 06:17:06 +0000 (0:00:01.029) 0:00:24.494 ******
2025-09-29 06:18:54.916926 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-29 06:18:54.916930 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-29 06:18:54.916933 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-29 06:18:54.916937 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-29 06:18:54.916940 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-29 06:18:54.916944 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-29 06:18:54.916952 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-29 06:18:54.916956 | orchestrator |
2025-09-29 06:18:54.916960 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-29 06:18:54.916964 | orchestrator | Monday 29 September 2025 06:17:08 +0000 (0:00:01.922) 0:00:26.417 ******
2025-09-29 06:18:54.916967 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:18:54.916971 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:18:54.916976 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-29 06:18:54.916982 | orchestrator |
2025-09-29 06:18:54.916989 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-29 06:18:54.916995 | orchestrator | Monday 29 September 2025 06:17:09 +0000 (0:00:00.363) 0:00:26.780 ******
2025-09-29 06:18:54.917002 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-29 06:18:54.917016 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-29 06:18:54.917022 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-29 06:18:54.917028 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-29 06:18:54.917037 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-29 06:18:54.917044 | orchestrator |
2025-09-29 06:18:54.917050 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-29 06:18:54.917056 | orchestrator | Monday 29 September 2025 06:17:56 +0000 (0:00:47.858) 0:01:14.639 ******
2025-09-29 06:18:54.917062 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917068 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917075 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917080 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917084 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917087 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917091 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-29 06:18:54.917095 | orchestrator |
2025-09-29 06:18:54.917098 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-29 06:18:54.917102 | orchestrator | Monday 29 September 2025 06:18:23 +0000 (0:00:26.185) 0:01:40.825 ******
2025-09-29 06:18:54.917106 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917109 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917113 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917117 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917120 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917124 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-29 06:18:54.917128 | orchestrator |
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-29 06:18:54.917132 | orchestrator | 2025-09-29 06:18:54.917135 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-29 06:18:54.917139 | orchestrator | Monday 29 September 2025 06:18:35 +0000 (0:00:12.735) 0:01:53.561 ****** 2025-09-29 06:18:54.917143 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-29 06:18:54.917147 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-29 06:18:54.917153 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-29 06:18:54.917159 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-29 06:18:54.917176 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-29 06:18:54.917182 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-29 06:18:54.917189 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-29 06:18:54.917194 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-29 06:18:54.917201 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-29 06:18:54.917207 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-29 06:18:54.917213 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-29 06:18:54.917219 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-29 06:18:54.917226 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-29 06:18:54.917232 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-29 06:18:54.917238 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-29 06:18:54.917244 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-29 06:18:54.917266 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-29 06:18:54.917272 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-29 06:18:54.917278 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-29 06:18:54.917284 | orchestrator | 2025-09-29 06:18:54.917291 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:18:54.917297 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-29 06:18:54.917305 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-29 06:18:54.917311 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-29 06:18:54.917317 | orchestrator | 2025-09-29 06:18:54.917324 | orchestrator | 2025-09-29 06:18:54.917330 | orchestrator | 2025-09-29 06:18:54.917337 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:18:54.917347 | orchestrator | Monday 29 September 2025 06:18:53 +0000 (0:00:17.410) 0:02:10.971 ****** 2025-09-29 06:18:54.917353 | orchestrator | =============================================================================== 2025-09-29 06:18:54.917359 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.86s 2025-09-29 06:18:54.917365 | orchestrator | generate keys ---------------------------------------------------------- 26.19s 2025-09-29 06:18:54.917372 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.41s 
2025-09-29 06:18:54.917378 | orchestrator | get keys from monitors ------------------------------------------------- 12.74s 2025-09-29 06:18:54.917384 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.06s 2025-09-29 06:18:54.917390 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.92s 2025-09-29 06:18:54.917394 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.63s 2025-09-29 06:18:54.917397 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.03s 2025-09-29 06:18:54.917401 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2025-09-29 06:18:54.917404 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.71s 2025-09-29 06:18:54.917408 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.69s 2025-09-29 06:18:54.917412 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s 2025-09-29 06:18:54.917419 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s 2025-09-29 06:18:54.917423 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s 2025-09-29 06:18:54.917427 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s 2025-09-29 06:18:54.917430 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.59s 2025-09-29 06:18:54.917434 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.57s 2025-09-29 06:18:54.917438 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.55s 2025-09-29 06:18:54.917441 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.53s 2025-09-29 
06:18:54.917445 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.52s 2025-09-29 06:18:54.917449 | orchestrator | 2025-09-29 06:18:54 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:54.917453 | orchestrator | 2025-09-29 06:18:54 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:54.917985 | orchestrator | 2025-09-29 06:18:54 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:18:54.918283 | orchestrator | 2025-09-29 06:18:54 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:18:57.965163 | orchestrator | 2025-09-29 06:18:57 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:18:57.967692 | orchestrator | 2025-09-29 06:18:57 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:18:57.970087 | orchestrator | 2025-09-29 06:18:57 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:18:57.970300 | orchestrator | 2025-09-29 06:18:57 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:01.009598 | orchestrator | 2025-09-29 06:19:01 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:01.010947 | orchestrator | 2025-09-29 06:19:01 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:01.012736 | orchestrator | 2025-09-29 06:19:01 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:01.013202 | orchestrator | 2025-09-29 06:19:01 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:04.058630 | orchestrator | 2025-09-29 06:19:04 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:04.061720 | orchestrator | 2025-09-29 06:19:04 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:04.063739 | orchestrator | 2025-09-29 
06:19:04 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:04.063876 | orchestrator | 2025-09-29 06:19:04 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:07.100656 | orchestrator | 2025-09-29 06:19:07 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:07.104670 | orchestrator | 2025-09-29 06:19:07 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:07.106182 | orchestrator | 2025-09-29 06:19:07 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:07.106947 | orchestrator | 2025-09-29 06:19:07 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:10.164208 | orchestrator | 2025-09-29 06:19:10 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:10.167109 | orchestrator | 2025-09-29 06:19:10 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:10.169661 | orchestrator | 2025-09-29 06:19:10 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:10.169725 | orchestrator | 2025-09-29 06:19:10 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:13.223697 | orchestrator | 2025-09-29 06:19:13 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:13.229429 | orchestrator | 2025-09-29 06:19:13 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:13.232730 | orchestrator | 2025-09-29 06:19:13 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:13.232805 | orchestrator | 2025-09-29 06:19:13 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:16.282974 | orchestrator | 2025-09-29 06:19:16 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:16.283488 | orchestrator | 2025-09-29 06:19:16 | INFO  | Task 
734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:16.284247 | orchestrator | 2025-09-29 06:19:16 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:16.284304 | orchestrator | 2025-09-29 06:19:16 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:19.337865 | orchestrator | 2025-09-29 06:19:19 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:19.339844 | orchestrator | 2025-09-29 06:19:19 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:19.342626 | orchestrator | 2025-09-29 06:19:19 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:19.342708 | orchestrator | 2025-09-29 06:19:19 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:22.402592 | orchestrator | 2025-09-29 06:19:22 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state STARTED 2025-09-29 06:19:22.403725 | orchestrator | 2025-09-29 06:19:22 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:22.405695 | orchestrator | 2025-09-29 06:19:22 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED 2025-09-29 06:19:22.405716 | orchestrator | 2025-09-29 06:19:22 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:19:25.453780 | orchestrator | 2025-09-29 06:19:25.453900 | orchestrator | 2025-09-29 06:19:25.453919 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:19:25.453933 | orchestrator | 2025-09-29 06:19:25.453944 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:19:25.453956 | orchestrator | Monday 29 September 2025 06:17:43 +0000 (0:00:00.234) 0:00:00.234 ****** 2025-09-29 06:19:25.453965 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.453973 | orchestrator | ok: [testbed-node-1] 2025-09-29 
06:19:25.453980 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.453987 | orchestrator | 2025-09-29 06:19:25.453994 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:19:25.454001 | orchestrator | Monday 29 September 2025 06:17:43 +0000 (0:00:00.250) 0:00:00.484 ****** 2025-09-29 06:19:25.454008 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-29 06:19:25.454068 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-29 06:19:25.454081 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-29 06:19:25.454088 | orchestrator | 2025-09-29 06:19:25.454095 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-09-29 06:19:25.454101 | orchestrator | 2025-09-29 06:19:25.454108 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-29 06:19:25.454193 | orchestrator | Monday 29 September 2025 06:17:43 +0000 (0:00:00.341) 0:00:00.826 ****** 2025-09-29 06:19:25.454226 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:19:25.454234 | orchestrator | 2025-09-29 06:19:25.454241 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-29 06:19:25.454248 | orchestrator | Monday 29 September 2025 06:17:44 +0000 (0:00:00.428) 0:00:01.254 ****** 2025-09-29 06:19:25.454521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.454571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.454605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 
'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.454615 | orchestrator | 2025-09-29 06:19:25.454624 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-29 06:19:25.454632 | orchestrator | Monday 29 September 2025 06:17:45 +0000 (0:00:00.993) 0:00:02.247 ****** 2025-09-29 06:19:25.454640 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.454648 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.454656 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.454663 | orchestrator | 2025-09-29 06:19:25.454671 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-29 06:19:25.454679 | orchestrator | Monday 29 September 2025 06:17:45 +0000 (0:00:00.339) 0:00:02.587 ****** 2025-09-29 06:19:25.454686 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-29 06:19:25.454701 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-29 06:19:25.454709 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-29 06:19:25.454717 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-29 06:19:25.454725 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-29 06:19:25.454737 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-29 06:19:25.454745 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-29 06:19:25.454753 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-29 06:19:25.454760 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-29 06:19:25.454766 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-29 06:19:25.454773 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-29 06:19:25.454779 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-29 06:19:25.454786 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-29 06:19:25.454792 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-29 06:19:25.454799 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-29 06:19:25.454807 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-29 06:19:25.454818 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-29 06:19:25.454829 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-29 06:19:25.454839 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-29 06:19:25.454845 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-29 06:19:25.454852 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-29 06:19:25.454858 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-29 06:19:25.454873 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-29 06:19:25.454880 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-29 06:19:25.454888 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-29 06:19:25.454896 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-29 06:19:25.454903 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-29 06:19:25.454910 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-29 06:19:25.454916 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-29 06:19:25.454923 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-29 06:19:25.454930 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-29 06:19:25.454936 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-29 06:19:25.454943 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-29 06:19:25.454950 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-29 06:19:25.454961 | orchestrator | 2025-09-29 06:19:25.454968 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.454975 | orchestrator | Monday 29 September 2025 06:17:46 +0000 (0:00:00.627) 0:00:03.214 
****** 2025-09-29 06:19:25.454981 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.454988 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.454994 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455001 | orchestrator | 2025-09-29 06:19:25.455007 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455014 | orchestrator | Monday 29 September 2025 06:17:46 +0000 (0:00:00.253) 0:00:03.467 ****** 2025-09-29 06:19:25.455021 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455027 | orchestrator | 2025-09-29 06:19:25.455038 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.455045 | orchestrator | Monday 29 September 2025 06:17:46 +0000 (0:00:00.118) 0:00:03.585 ****** 2025-09-29 06:19:25.455051 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455058 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455064 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455071 | orchestrator | 2025-09-29 06:19:25.455078 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455084 | orchestrator | Monday 29 September 2025 06:17:47 +0000 (0:00:00.341) 0:00:03.927 ****** 2025-09-29 06:19:25.455091 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455097 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455104 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455111 | orchestrator | 2025-09-29 06:19:25.455117 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455124 | orchestrator | Monday 29 September 2025 06:17:47 +0000 (0:00:00.271) 0:00:04.198 ****** 2025-09-29 06:19:25.455130 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455137 | orchestrator | 2025-09-29 06:19:25.455144 | orchestrator | TASK [horizon : 
Update custom policy file name] ******************************** 2025-09-29 06:19:25.455150 | orchestrator | Monday 29 September 2025 06:17:47 +0000 (0:00:00.103) 0:00:04.302 ****** 2025-09-29 06:19:25.455157 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455164 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455170 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455177 | orchestrator | 2025-09-29 06:19:25.455183 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455190 | orchestrator | Monday 29 September 2025 06:17:47 +0000 (0:00:00.245) 0:00:04.547 ****** 2025-09-29 06:19:25.455197 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455203 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455210 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455216 | orchestrator | 2025-09-29 06:19:25.455223 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455229 | orchestrator | Monday 29 September 2025 06:17:47 +0000 (0:00:00.235) 0:00:04.783 ****** 2025-09-29 06:19:25.455236 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455243 | orchestrator | 2025-09-29 06:19:25.455249 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.455256 | orchestrator | Monday 29 September 2025 06:17:48 +0000 (0:00:00.113) 0:00:04.896 ****** 2025-09-29 06:19:25.455262 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455294 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455304 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455315 | orchestrator | 2025-09-29 06:19:25.455326 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455342 | orchestrator | Monday 29 September 2025 06:17:48 +0000 (0:00:00.401) 
0:00:05.297 ****** 2025-09-29 06:19:25.455352 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455363 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455375 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455385 | orchestrator | 2025-09-29 06:19:25.455405 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455417 | orchestrator | Monday 29 September 2025 06:17:48 +0000 (0:00:00.251) 0:00:05.549 ****** 2025-09-29 06:19:25.455427 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455439 | orchestrator | 2025-09-29 06:19:25.455446 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.455452 | orchestrator | Monday 29 September 2025 06:17:48 +0000 (0:00:00.114) 0:00:05.664 ****** 2025-09-29 06:19:25.455459 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455465 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455472 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455478 | orchestrator | 2025-09-29 06:19:25.455485 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455491 | orchestrator | Monday 29 September 2025 06:17:49 +0000 (0:00:00.244) 0:00:05.908 ****** 2025-09-29 06:19:25.455498 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455504 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455511 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455517 | orchestrator | 2025-09-29 06:19:25.455524 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455530 | orchestrator | Monday 29 September 2025 06:17:49 +0000 (0:00:00.462) 0:00:06.371 ****** 2025-09-29 06:19:25.455536 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455543 | orchestrator | 2025-09-29 06:19:25.455549 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.455556 | orchestrator | Monday 29 September 2025 06:17:49 +0000 (0:00:00.126) 0:00:06.497 ****** 2025-09-29 06:19:25.455562 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455569 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455575 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455582 | orchestrator | 2025-09-29 06:19:25.455588 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455595 | orchestrator | Monday 29 September 2025 06:17:49 +0000 (0:00:00.278) 0:00:06.775 ****** 2025-09-29 06:19:25.455601 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455608 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455614 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455621 | orchestrator | 2025-09-29 06:19:25.455627 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455634 | orchestrator | Monday 29 September 2025 06:17:50 +0000 (0:00:00.298) 0:00:07.073 ****** 2025-09-29 06:19:25.455640 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455647 | orchestrator | 2025-09-29 06:19:25.455653 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.455660 | orchestrator | Monday 29 September 2025 06:17:50 +0000 (0:00:00.128) 0:00:07.202 ****** 2025-09-29 06:19:25.455666 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455673 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455679 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455685 | orchestrator | 2025-09-29 06:19:25.455692 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455704 | orchestrator | Monday 29 September 2025 06:17:50 
+0000 (0:00:00.288) 0:00:07.491 ****** 2025-09-29 06:19:25.455711 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455717 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455724 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455730 | orchestrator | 2025-09-29 06:19:25.455737 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455745 | orchestrator | Monday 29 September 2025 06:17:51 +0000 (0:00:00.543) 0:00:08.034 ****** 2025-09-29 06:19:25.455756 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455768 | orchestrator | 2025-09-29 06:19:25.455777 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.455784 | orchestrator | Monday 29 September 2025 06:17:51 +0000 (0:00:00.131) 0:00:08.166 ****** 2025-09-29 06:19:25.455797 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455803 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455810 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455817 | orchestrator | 2025-09-29 06:19:25.455823 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455830 | orchestrator | Monday 29 September 2025 06:17:51 +0000 (0:00:00.275) 0:00:08.441 ****** 2025-09-29 06:19:25.455836 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455843 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455849 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455856 | orchestrator | 2025-09-29 06:19:25.455863 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455869 | orchestrator | Monday 29 September 2025 06:17:51 +0000 (0:00:00.260) 0:00:08.702 ****** 2025-09-29 06:19:25.455876 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455882 | orchestrator | 2025-09-29 06:19:25.455889 
| orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.455896 | orchestrator | Monday 29 September 2025 06:17:51 +0000 (0:00:00.103) 0:00:08.805 ****** 2025-09-29 06:19:25.455902 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455909 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.455915 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.455922 | orchestrator | 2025-09-29 06:19:25.455929 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.455935 | orchestrator | Monday 29 September 2025 06:17:52 +0000 (0:00:00.236) 0:00:09.042 ****** 2025-09-29 06:19:25.455942 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.455948 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.455955 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.455961 | orchestrator | 2025-09-29 06:19:25.455968 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.455974 | orchestrator | Monday 29 September 2025 06:17:52 +0000 (0:00:00.412) 0:00:09.455 ****** 2025-09-29 06:19:25.455981 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.455987 | orchestrator | 2025-09-29 06:19:25.455994 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.456005 | orchestrator | Monday 29 September 2025 06:17:52 +0000 (0:00:00.111) 0:00:09.567 ****** 2025-09-29 06:19:25.456012 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456018 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.456025 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.456031 | orchestrator | 2025-09-29 06:19:25.456038 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-29 06:19:25.456044 | orchestrator | Monday 29 September 
2025 06:17:52 +0000 (0:00:00.262) 0:00:09.830 ****** 2025-09-29 06:19:25.456051 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:19:25.456057 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:19:25.456064 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:19:25.456070 | orchestrator | 2025-09-29 06:19:25.456081 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-29 06:19:25.456093 | orchestrator | Monday 29 September 2025 06:17:53 +0000 (0:00:00.250) 0:00:10.080 ****** 2025-09-29 06:19:25.456102 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456109 | orchestrator | 2025-09-29 06:19:25.456116 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-29 06:19:25.456122 | orchestrator | Monday 29 September 2025 06:17:53 +0000 (0:00:00.129) 0:00:10.210 ****** 2025-09-29 06:19:25.456129 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456135 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.456142 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.456148 | orchestrator | 2025-09-29 06:19:25.456155 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-29 06:19:25.456162 | orchestrator | Monday 29 September 2025 06:17:53 +0000 (0:00:00.381) 0:00:10.591 ****** 2025-09-29 06:19:25.456174 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:19:25.456180 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:19:25.456187 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:19:25.456193 | orchestrator | 2025-09-29 06:19:25.456200 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-29 06:19:25.456206 | orchestrator | Monday 29 September 2025 06:17:55 +0000 (0:00:01.553) 0:00:12.145 ****** 2025-09-29 06:19:25.456213 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-29 06:19:25.456220 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-29 06:19:25.456226 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-29 06:19:25.456233 | orchestrator | 2025-09-29 06:19:25.456239 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-29 06:19:25.456246 | orchestrator | Monday 29 September 2025 06:17:56 +0000 (0:00:01.424) 0:00:13.569 ****** 2025-09-29 06:19:25.456253 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-29 06:19:25.456260 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-29 06:19:25.456289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-29 06:19:25.456296 | orchestrator | 2025-09-29 06:19:25.456308 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-29 06:19:25.456325 | orchestrator | Monday 29 September 2025 06:17:58 +0000 (0:00:02.128) 0:00:15.698 ****** 2025-09-29 06:19:25.456335 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-29 06:19:25.456346 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-29 06:19:25.456358 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-29 06:19:25.456369 | orchestrator | 2025-09-29 06:19:25.456379 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-29 06:19:25.456389 | orchestrator | Monday 29 September 2025 06:18:00 +0000 (0:00:02.023) 
0:00:17.721 ****** 2025-09-29 06:19:25.456401 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456412 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.456423 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.456434 | orchestrator | 2025-09-29 06:19:25.456440 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-29 06:19:25.456447 | orchestrator | Monday 29 September 2025 06:18:01 +0000 (0:00:00.312) 0:00:18.034 ****** 2025-09-29 06:19:25.456453 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456460 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.456466 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.456473 | orchestrator | 2025-09-29 06:19:25.456483 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-29 06:19:25.456494 | orchestrator | Monday 29 September 2025 06:18:01 +0000 (0:00:00.291) 0:00:18.326 ****** 2025-09-29 06:19:25.456505 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:19:25.456516 | orchestrator | 2025-09-29 06:19:25.456526 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-29 06:19:25.456537 | orchestrator | Monday 29 September 2025 06:18:02 +0000 (0:00:00.594) 0:00:18.920 ****** 2025-09-29 06:19:25.456558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.456590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.456603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 
'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 
'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.456617 | orchestrator | 2025-09-29 06:19:25.456624 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-29 06:19:25.456630 | orchestrator | Monday 29 September 2025 06:18:03 +0000 (0:00:01.786) 0:00:20.707 ****** 2025-09-29 06:19:25.456647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:19:25.456661 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.456673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:19:25.456681 | orchestrator | 2025-09-29 06:19:25 | INFO  | Task ae00de38-11a9-4ebe-83e7-7fbfd7cd6cd2 is in state SUCCESS 2025-09-29 06:19:25 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED 2025-09-29 06:19:25.456688 | orchestrator | 2025-09-29 06:19:25.456695 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:19:25.456718 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.456725 | orchestrator | 2025-09-29 06:19:25.456731 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-29 06:19:25.456738 | orchestrator | Monday 29 September 2025 06:18:04 +0000 (0:00:00.715) 0:00:21.422 ****** 2025-09-29 06:19:25.456750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:19:25.456758 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456772 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:19:25.456787 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.456807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-29 06:19:25.456818 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.456835 | orchestrator | 2025-09-29 06:19:25.456844 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-29 06:19:25.456853 | orchestrator | Monday 29 September 2025 06:18:05 +0000 (0:00:00.781) 0:00:22.204 ****** 2025-09-29 06:19:25.456867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.456896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.456922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-29 06:19:25.456933 | orchestrator | 2025-09-29 06:19:25.456944 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-29 06:19:25.456953 | orchestrator | Monday 29 September 2025 06:18:06 +0000 (0:00:01.452) 0:00:23.657 ****** 2025-09-29 06:19:25.456962 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:19:25.456973 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:19:25.456982 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:19:25.456992 | orchestrator | 2025-09-29 06:19:25.457003 | 
orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-29 06:19:25.457013 | orchestrator | Monday 29 September 2025 06:18:07 +0000 (0:00:00.341) 0:00:23.999 ******
2025-09-29 06:19:25.457030 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:19:25.457040 | orchestrator |
2025-09-29 06:19:25.457050 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-29 06:19:25.457059 | orchestrator | Monday 29 September 2025 06:18:07 +0000 (0:00:00.521) 0:00:24.520 ******
2025-09-29 06:19:25.457069 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:19:25.457079 | orchestrator |
2025-09-29 06:19:25.457089 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-29 06:19:25.457100 | orchestrator | Monday 29 September 2025 06:18:10 +0000 (0:00:02.706) 0:00:27.227 ******
2025-09-29 06:19:25.457120 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:19:25.457132 | orchestrator |
2025-09-29 06:19:25.457144 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-29 06:19:25.457155 | orchestrator | Monday 29 September 2025 06:18:13 +0000 (0:00:02.947) 0:00:30.174 ******
2025-09-29 06:19:25.457166 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:19:25.457178 | orchestrator |
2025-09-29 06:19:25.457188 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-29 06:19:25.457199 | orchestrator | Monday 29 September 2025 06:18:30 +0000 (0:00:17.277) 0:00:47.451 ******
2025-09-29 06:19:25.457209 | orchestrator |
2025-09-29 06:19:25.457220 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-29 06:19:25.457231 | orchestrator | Monday 29 September 2025 06:18:30 +0000 (0:00:00.061) 0:00:47.513 ******
2025-09-29 06:19:25.457241 | orchestrator |
2025-09-29 06:19:25.457253 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-29 06:19:25.457263 | orchestrator | Monday 29 September 2025 06:18:30 +0000 (0:00:00.058) 0:00:47.572 ******
2025-09-29 06:19:25.457307 | orchestrator |
2025-09-29 06:19:25.457318 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-29 06:19:25.457330 | orchestrator | Monday 29 September 2025 06:18:30 +0000 (0:00:00.063) 0:00:47.635 ******
2025-09-29 06:19:25.457341 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:19:25.457351 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:19:25.457362 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:19:25.457373 | orchestrator |
2025-09-29 06:19:25.457384 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:19:25.457396 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-29 06:19:25.457408 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-29 06:19:25.457427 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-29 06:19:25.457439 | orchestrator |
2025-09-29 06:19:25.457450 | orchestrator |
2025-09-29 06:19:25.457462 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:19:25.457474 | orchestrator | Monday 29 September 2025 06:19:24 +0000 (0:00:53.260) 0:01:40.896 ******
2025-09-29 06:19:25.457485 | orchestrator | ===============================================================================
2025-09-29 06:19:25.457496 | orchestrator | horizon : Restart horizon container ------------------------------------ 53.26s
2025-09-29 06:19:25.457508 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.28s
2025-09-29 06:19:25.457519 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.95s
2025-09-29 06:19:25.457530 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.71s
2025-09-29 06:19:25.457540 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.13s
2025-09-29 06:19:25.457551 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.02s
2025-09-29 06:19:25.457562 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.79s
2025-09-29 06:19:25.457574 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.55s
2025-09-29 06:19:25.457586 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.45s
2025-09-29 06:19:25.457597 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.42s
2025-09-29 06:19:25.457609 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.99s
2025-09-29 06:19:25.457620 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.78s
2025-09-29 06:19:25.457632 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.72s
2025-09-29 06:19:25.457652 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s
2025-09-29 06:19:25.457664 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s
2025-09-29 06:19:25.457676 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s
2025-09-29 06:19:25.457687 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2025-09-29 06:19:25.457699 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s
2025-09-29 06:19:25.457711 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.43s
2025-09-29 06:19:25.457722 | orchestrator | horizon : Update policy file name --------------------------------------- 0.41s
2025-09-29 06:19:25.457734 | orchestrator | 2025-09-29 06:19:25 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED
2025-09-29 06:19:25.457754 | orchestrator | 2025-09-29 06:19:25 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:28.509818 | orchestrator | 2025-09-29 06:19:28 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:28.511473 | orchestrator | 2025-09-29 06:19:28 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state STARTED
2025-09-29 06:19:28.511548 | orchestrator | 2025-09-29 06:19:28 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:31.558738 | orchestrator | 2025-09-29 06:19:31 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:31.559139 | orchestrator | 2025-09-29 06:19:31 | INFO  | Task 09aabd4a-6504-4a85-91c7-caf8e5bdd5ba is in state SUCCESS
2025-09-29 06:19:31.559350 | orchestrator | 2025-09-29 06:19:31 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:34.620515 | orchestrator | 2025-09-29 06:19:34 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:34.622135 | orchestrator | 2025-09-29 06:19:34 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:34.622180 | orchestrator | 2025-09-29 06:19:34 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:37.669612 | orchestrator | 2025-09-29 06:19:37 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:37.671530 | orchestrator | 2025-09-29 06:19:37 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:37.671594 | orchestrator | 2025-09-29 06:19:37 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:40.718339 | orchestrator | 2025-09-29 06:19:40 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:40.723878 | orchestrator | 2025-09-29 06:19:40 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:40.723937 | orchestrator | 2025-09-29 06:19:40 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:43.765705 | orchestrator | 2025-09-29 06:19:43 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:43.767463 | orchestrator | 2025-09-29 06:19:43 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:43.767507 | orchestrator | 2025-09-29 06:19:43 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:46.802590 | orchestrator | 2025-09-29 06:19:46 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:46.803574 | orchestrator | 2025-09-29 06:19:46 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:46.803605 | orchestrator | 2025-09-29 06:19:46 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:49.850395 | orchestrator | 2025-09-29 06:19:49 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:49.853278 | orchestrator | 2025-09-29 06:19:49 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:49.853329 | orchestrator | 2025-09-29 06:19:49 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:52.896057 | orchestrator | 2025-09-29 06:19:52 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:52.897411 | orchestrator | 2025-09-29 06:19:52 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:52.897443 | orchestrator | 2025-09-29 06:19:52 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:55.935598 | orchestrator | 2025-09-29 06:19:55 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:55.938325 | orchestrator | 2025-09-29 06:19:55 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:55.938379 | orchestrator | 2025-09-29 06:19:55 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:19:58.980715 | orchestrator | 2025-09-29 06:19:58 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:19:58.982475 | orchestrator | 2025-09-29 06:19:58 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:19:58.982573 | orchestrator | 2025-09-29 06:19:58 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:02.032957 | orchestrator | 2025-09-29 06:20:02 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:02.034563 | orchestrator | 2025-09-29 06:20:02 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:02.034638 | orchestrator | 2025-09-29 06:20:02 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:05.077784 | orchestrator | 2025-09-29 06:20:05 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:05.079448 | orchestrator | 2025-09-29 06:20:05 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:05.079493 | orchestrator | 2025-09-29 06:20:05 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:08.117260 | orchestrator | 2025-09-29 06:20:08 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:08.119875 | orchestrator | 2025-09-29 06:20:08 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:08.119941 | orchestrator | 2025-09-29 06:20:08 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:11.158814 | orchestrator | 2025-09-29 06:20:11 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:11.160287 | orchestrator | 2025-09-29 06:20:11 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:11.160367 | orchestrator | 2025-09-29 06:20:11 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:14.204895 | orchestrator | 2025-09-29 06:20:14 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:14.206759 | orchestrator | 2025-09-29 06:20:14 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:14.206822 | orchestrator | 2025-09-29 06:20:14 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:17.248985 | orchestrator | 2025-09-29 06:20:17 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:17.249666 | orchestrator | 2025-09-29 06:20:17 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:17.249705 | orchestrator | 2025-09-29 06:20:17 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:20.293752 | orchestrator | 2025-09-29 06:20:20 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:20.294161 | orchestrator | 2025-09-29 06:20:20 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:20.294213 | orchestrator | 2025-09-29 06:20:20 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:23.348630 | orchestrator | 2025-09-29 06:20:23 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:23.349847 | orchestrator | 2025-09-29 06:20:23 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:23.349889 | orchestrator | 2025-09-29 06:20:23 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:26.398179 | orchestrator | 2025-09-29 06:20:26 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:26.401017 | orchestrator | 2025-09-29 06:20:26 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:26.402203 | orchestrator | 2025-09-29 06:20:26 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:29.445045 | orchestrator | 2025-09-29 06:20:29 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state STARTED
2025-09-29 06:20:29.445813 | orchestrator | 2025-09-29 06:20:29 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state STARTED
2025-09-29 06:20:29.445858 | orchestrator | 2025-09-29 06:20:29 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:32.483280 | orchestrator |
2025-09-29 06:20:32.483443 | orchestrator |
2025-09-29 06:20:32.483462 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-09-29 06:20:32.483474 | orchestrator |
2025-09-29 06:20:32.483484 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2025-09-29 06:20:32.483494 | orchestrator | Monday 29 September 2025 06:18:57 +0000 (0:00:00.191) 0:00:00.191 ******
2025-09-29 06:20:32.483504 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-09-29 06:20:32.483515 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483525 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483534 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-09-29 06:20:32.483544 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483554 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-09-29 06:20:32.483563 |
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-09-29 06:20:32.483572 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-09-29 06:20:32.483582 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-09-29 06:20:32.483591 | orchestrator |
2025-09-29 06:20:32.483601 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-09-29 06:20:32.483610 | orchestrator | Monday 29 September 2025 06:19:02 +0000 (0:00:04.752) 0:00:04.944 ******
2025-09-29 06:20:32.483620 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-09-29 06:20:32.483630 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483639 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483649 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-09-29 06:20:32.483681 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483692 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-09-29 06:20:32.483701 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-09-29 06:20:32.483712 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-09-29 06:20:32.483728 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-09-29 06:20:32.483743 | orchestrator |
2025-09-29 06:20:32.483759 | orchestrator | TASK [Create share directory] **************************************************
2025-09-29 06:20:32.483774 | orchestrator | Monday 29 September 2025 06:19:06 +0000 (0:00:04.244) 0:00:09.189 ******
2025-09-29 06:20:32.483791 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-29 06:20:32.483806 | orchestrator |
2025-09-29 06:20:32.483820 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-09-29 06:20:32.483836 | orchestrator | Monday 29 September 2025 06:19:07 +0000 (0:00:01.023) 0:00:10.212 ******
2025-09-29 06:20:32.483853 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-09-29 06:20:32.483869 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483885 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483902 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-09-29 06:20:32.483936 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.483951 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-09-29 06:20:32.483965 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-09-29 06:20:32.483979 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-09-29 06:20:32.483994 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-09-29 06:20:32.484009 | orchestrator |
2025-09-29 06:20:32.484024 | orchestrator | TASK [Check if target directories exist] ***************************************
2025-09-29 06:20:32.484038 | orchestrator | Monday 29 September 2025 06:19:20 +0000 (0:00:13.131) 0:00:23.344 ******
2025-09-29 06:20:32.484052 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2025-09-29 06:20:32.484066 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2025-09-29 06:20:32.484081 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-09-29 06:20:32.484096 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2025-09-29 06:20:32.484136 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-09-29 06:20:32.484155 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2025-09-29 06:20:32.484171 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2025-09-29 06:20:32.484187 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2025-09-29 06:20:32.484203 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2025-09-29 06:20:32.484219 | orchestrator |
2025-09-29 06:20:32.484234 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-09-29 06:20:32.484249 | orchestrator | Monday 29 September 2025 06:19:24 +0000 (0:00:03.314) 0:00:26.659 ******
2025-09-29 06:20:32.484265 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-09-29 06:20:32.484296 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.484312 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.484405 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-09-29 06:20:32.484425 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-29 06:20:32.484440 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-09-29 06:20:32.484456 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-09-29 06:20:32.484472 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-09-29 06:20:32.484489 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-09-29 06:20:32.484505 | orchestrator |
2025-09-29 06:20:32.484520 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:20:32.484537 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-29 06:20:32.484549 | orchestrator |
2025-09-29 06:20:32.484559 | orchestrator |
2025-09-29 06:20:32.484568 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:20:32.484578 | orchestrator | Monday 29 September 2025 06:19:30 +0000 (0:00:06.424) 0:00:33.084 ******
2025-09-29 06:20:32.484587 | orchestrator | ===============================================================================
2025-09-29 06:20:32.484596 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.13s
2025-09-29 06:20:32.484606 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.42s
2025-09-29 06:20:32.484615 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.75s
2025-09-29 06:20:32.484624 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.24s
2025-09-29 06:20:32.484634 | orchestrator | Check if target directories exist --------------------------------------- 3.31s
2025-09-29 06:20:32.484643 | orchestrator | Create share directory -------------------------------------------------- 1.02s
2025-09-29 06:20:32.484652 | orchestrator |
2025-09-29 06:20:32.484661 | orchestrator |
2025-09-29 06:20:32.484671 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-29 06:20:32.484681 | orchestrator |
2025-09-29 06:20:32.484690 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-29 06:20:32.484699 | orchestrator | Monday 29 September 2025 06:19:35 +0000 (0:00:00.259) 0:00:00.259 ******
2025-09-29 06:20:32.484709 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-29 06:20:32.484720 | orchestrator |
2025-09-29 06:20:32.484729 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-29 06:20:32.484738 | orchestrator | Monday 29 September 2025 06:19:35 +0000 (0:00:00.227) 0:00:00.486 ******
2025-09-29 06:20:32.484748 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-29 06:20:32.484757 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-29 06:20:32.484776 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-29 06:20:32.484785 | orchestrator |
2025-09-29 06:20:32.484795 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-29 06:20:32.484804 | orchestrator | Monday 29 September 2025 06:19:36 +0000 (0:00:01.145) 0:00:01.632 ******
2025-09-29 06:20:32.484814 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-29 06:20:32.484823 | orchestrator |
2025-09-29 06:20:32.484833 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-29 06:20:32.484842 | orchestrator | Monday 29 September 2025 06:19:37 +0000 (0:00:00.871) 0:00:02.635 ******
2025-09-29 06:20:32.484851 | orchestrator | changed: [testbed-manager]
2025-09-29 06:20:32.484878 | orchestrator |
2025-09-29 06:20:32.484895 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-29 06:20:32.484911 | orchestrator | Monday 29 September 2025 06:19:38 +0000 (0:00:00.783) 0:00:03.506 ******
2025-09-29 06:20:32.484925 | orchestrator | changed: [testbed-manager]
2025-09-29 06:20:32.484941 | orchestrator |
2025-09-29 06:20:32.484957 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-29 06:20:32.484971 | orchestrator | Monday 29 September 2025 06:19:39 +0000 (0:00:00.783) 0:00:04.290 ******
2025-09-29 06:20:32.484985 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-09-29 06:20:32.484998 | orchestrator | ok: [testbed-manager]
2025-09-29 06:20:32.485208 | orchestrator |
2025-09-29 06:20:32.485227 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-29 06:20:32.485260 | orchestrator | Monday 29 September 2025 06:20:20 +0000 (0:00:40.971) 0:00:45.261 ******
2025-09-29 06:20:32.485277 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-29 06:20:32.485294 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-29 06:20:32.485310 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-29 06:20:32.485352 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-29 06:20:32.485370 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-29 06:20:32.485386 | orchestrator |
2025-09-29 06:20:32.485402 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-29 06:20:32.485419 | orchestrator | Monday 29 September 2025 06:20:24 +0000 (0:00:04.034) 0:00:49.296 ******
2025-09-29 06:20:32.485434 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-29 06:20:32.485451 | orchestrator |
2025-09-29 06:20:32.485468 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-29 06:20:32.485485 | orchestrator | Monday 29 September 2025 06:20:24 +0000 (0:00:00.478) 0:00:49.774 ******
2025-09-29 06:20:32.485500 | orchestrator | skipping: [testbed-manager]
2025-09-29 06:20:32.485515 | orchestrator |
2025-09-29 06:20:32.485525 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-29 06:20:32.485535 | orchestrator | Monday 29 September 2025 06:20:24 +0000 (0:00:00.495) 0:00:49.909 ******
2025-09-29 06:20:32.485544 | orchestrator | skipping: [testbed-manager]
2025-09-29 06:20:32.485553 | orchestrator |
2025-09-29 06:20:32.485563 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-29 06:20:32.485572 | orchestrator | Monday 29 September 2025 06:20:25 +0000 (0:00:00.495) 0:00:50.404 ******
2025-09-29 06:20:32.485582 | orchestrator | changed: [testbed-manager]
2025-09-29 06:20:32.485591 | orchestrator |
2025-09-29 06:20:32.485601 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-29 06:20:32.485610 | orchestrator | Monday 29 September 2025 06:20:26 +0000 (0:00:01.749) 0:00:52.154 ******
2025-09-29 06:20:32.485620 | orchestrator | changed: [testbed-manager]
2025-09-29 06:20:32.485629 | orchestrator |
2025-09-29 06:20:32.485639 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-29 06:20:32.485648 | orchestrator | Monday 29 September 2025 06:20:27 +0000 (0:00:00.774) 0:00:52.929 ******
2025-09-29 06:20:32.485658 | orchestrator | changed: [testbed-manager]
2025-09-29 06:20:32.485667 | orchestrator |
2025-09-29 06:20:32.485676 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-29 06:20:32.485686 | orchestrator | Monday
29 September 2025 06:20:28 +0000 (0:00:00.652) 0:00:53.582 ****** 2025-09-29 06:20:32.485695 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-29 06:20:32.485704 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-29 06:20:32.485714 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-29 06:20:32.485723 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-29 06:20:32.485733 | orchestrator | 2025-09-29 06:20:32.485743 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:20:32.485765 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:20:32.485775 | orchestrator | 2025-09-29 06:20:32.485785 | orchestrator | 2025-09-29 06:20:32.485796 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:20:32.485807 | orchestrator | Monday 29 September 2025 06:20:29 +0000 (0:00:01.462) 0:00:55.044 ****** 2025-09-29 06:20:32.485818 | orchestrator | =============================================================================== 2025-09-29 06:20:32.485829 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.97s 2025-09-29 06:20:32.485840 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.03s 2025-09-29 06:20:32.485851 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.75s 2025-09-29 06:20:32.485861 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.46s 2025-09-29 06:20:32.485872 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.15s 2025-09-29 06:20:32.485883 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.00s 2025-09-29 06:20:32.485894 | orchestrator | osism.services.cephclient : Copy keyring file 
--------------------------- 0.87s 2025-09-29 06:20:32.485913 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.78s 2025-09-29 06:20:32.485924 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.77s 2025-09-29 06:20:32.485935 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.65s 2025-09-29 06:20:32.485946 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.50s 2025-09-29 06:20:32.485957 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2025-09-29 06:20:32.485968 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-09-29 06:20:32.485979 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-09-29 06:20:32.485990 | orchestrator | 2025-09-29 06:20:32 | INFO  | Task b5f8c16a-a30c-4158-802b-568c9b9e4d02 is in state SUCCESS 2025-09-29 06:20:32.486001 | orchestrator | 2025-09-29 06:20:32 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:20:32.486071 | orchestrator | 2025-09-29 06:20:32 | INFO  | Task 734e8ff1-c44a-442a-b23c-3af359e1cc0b is in state SUCCESS 2025-09-29 06:20:32.488466 | orchestrator | 2025-09-29 06:20:32.488509 | orchestrator | 2025-09-29 06:20:32.488519 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:20:32.488529 | orchestrator | 2025-09-29 06:20:32.488538 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:20:32.488547 | orchestrator | Monday 29 September 2025 06:17:43 +0000 (0:00:00.235) 0:00:00.235 ****** 2025-09-29 06:20:32.488559 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:20:32.488576 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:20:32.488592 | orchestrator | ok: 
[testbed-node-2] 2025-09-29 06:20:32.488608 | orchestrator | 2025-09-29 06:20:32.488623 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:20:32.488638 | orchestrator | Monday 29 September 2025 06:17:43 +0000 (0:00:00.246) 0:00:00.482 ****** 2025-09-29 06:20:32.488653 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-29 06:20:32.488670 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-29 06:20:32.488685 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-29 06:20:32.488702 | orchestrator | 2025-09-29 06:20:32.488718 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-29 06:20:32.488733 | orchestrator | 2025-09-29 06:20:32.488750 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-29 06:20:32.488760 | orchestrator | Monday 29 September 2025 06:17:43 +0000 (0:00:00.341) 0:00:00.823 ****** 2025-09-29 06:20:32.488769 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:20:32.488793 | orchestrator | 2025-09-29 06:20:32.488802 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-29 06:20:32.488812 | orchestrator | Monday 29 September 2025 06:17:44 +0000 (0:00:00.476) 0:00:01.300 ****** 2025-09-29 06:20:32.488894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.488923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.488969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.488984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489013 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489061 | orchestrator | 2025-09-29 06:20:32.489071 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-29 06:20:32.489081 | orchestrator | Monday 29 September 2025 06:17:46 +0000 (0:00:01.702) 0:00:03.003 ****** 2025-09-29 06:20:32.489096 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-29 06:20:32.489106 | orchestrator | 2025-09-29 06:20:32.489116 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-09-29 06:20:32.489126 | orchestrator | Monday 29 September 2025 06:17:46 +0000 (0:00:00.753) 0:00:03.756 ****** 2025-09-29 06:20:32.489135 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:20:32.489145 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:20:32.489154 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:20:32.489164 | orchestrator | 2025-09-29 06:20:32.489180 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-29 06:20:32.489190 | orchestrator | Monday 29 September 2025 06:17:47 +0000 (0:00:00.363) 0:00:04.119 ****** 2025-09-29 06:20:32.489199 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-29 06:20:32.489209 | orchestrator | 2025-09-29 06:20:32.489218 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-29 
06:20:32.489228 | orchestrator | Monday 29 September 2025 06:17:47 +0000 (0:00:00.597) 0:00:04.716 ****** 2025-09-29 06:20:32.489237 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:20:32.489247 | orchestrator | 2025-09-29 06:20:32.489256 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-29 06:20:32.489265 | orchestrator | Monday 29 September 2025 06:17:48 +0000 (0:00:00.454) 0:00:05.170 ****** 2025-09-29 06:20:32.489276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.489287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.489371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.489404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.489476 | orchestrator | 2025-09-29 06:20:32.489486 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-29 06:20:32.489495 | orchestrator | Monday 29 September 2025 06:17:51 +0000 (0:00:03.188) 0:00:08.359 ****** 2025-09-29 06:20:32.489512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-29 06:20:32.489530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.489540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-29 06:20:32.489550 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:20:32.489561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-29 06:20:32.489576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2025-09-29 06:20:32.489591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-29 06:20:32.489611 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:20:32.489621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-29 06:20:32.489632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.489642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-29 06:20:32.489652 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:20:32.489661 | orchestrator | 2025-09-29 06:20:32.489671 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-29 06:20:32.489681 | orchestrator | Monday 29 September 2025 06:17:52 +0000 (0:00:00.644) 0:00:09.004 ****** 2025-09-29 06:20:32.489695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-29 06:20:32.489724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.489742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-29 06:20:32.489758 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:20:32.489775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-29 06:20:32.489791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.489808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-29 06:20:32.489824 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:20:32.489847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-29 06:20:32.489883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 
06:20:32.489900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-29 06:20:32.489916 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:20:32.489932 | orchestrator | 2025-09-29 06:20:32.489942 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-09-29 06:20:32.489952 | orchestrator | Monday 29 September 2025 06:17:52 +0000 (0:00:00.643) 0:00:09.647 ****** 2025-09-29 06:20:32.489962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.489978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}}) 2025-09-29 06:20:32.490159 | orchestrator | 2025-09-29 06:20:32.490169 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-29 06:20:32.490179 | orchestrator | Monday 29 September 2025 06:17:55 +0000 (0:00:02.968) 0:00:12.615 ****** 2025-09-29 06:20:32.490197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 
06:20:32.490219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.490258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.490279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490315 | orchestrator | 2025-09-29 06:20:32.490365 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-29 06:20:32.490376 | orchestrator | Monday 29 September 2025 06:18:00 +0000 (0:00:04.982) 0:00:17.598 ****** 2025-09-29 06:20:32.490386 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:20:32.490396 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:20:32.490405 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:20:32.490414 | orchestrator | 2025-09-29 06:20:32.490424 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2025-09-29 06:20:32.490433 | orchestrator | Monday 29 September 2025 06:18:02 +0000 (0:00:01.533) 0:00:19.131 ****** 2025-09-29 06:20:32.490442 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:20:32.490452 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:20:32.490461 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:20:32.490470 | orchestrator | 2025-09-29 06:20:32.490489 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-29 06:20:32.490499 | orchestrator | Monday 29 September 2025 06:18:02 +0000 (0:00:00.660) 0:00:19.792 ****** 2025-09-29 06:20:32.490508 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:20:32.490517 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:20:32.490527 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:20:32.490536 | orchestrator | 2025-09-29 06:20:32.490546 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-29 06:20:32.490555 | orchestrator | Monday 29 September 2025 06:18:03 +0000 (0:00:00.289) 0:00:20.081 ****** 2025-09-29 06:20:32.490564 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:20:32.490574 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:20:32.490583 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:20:32.490592 | orchestrator | 2025-09-29 06:20:32.490601 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-29 06:20:32.490611 | orchestrator | Monday 29 September 2025 06:18:03 +0000 (0:00:00.522) 0:00:20.604 ****** 2025-09-29 06:20:32.490630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.490652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.490684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-29 06:20:32.490701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-29 06:20:32.490712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
2025-09-29 06:20:32.490738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-29 06:20:32.490748 | orchestrator | 2025-09-29 06:20:32.490881 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-29 06:20:32.490905 | orchestrator | Monday 29 September 2025 06:18:06 +0000 (0:00:02.480) 0:00:23.085 ****** 2025-09-29 06:20:32.490922 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:20:32.490938 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:20:32.490955 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:20:32.490971 | orchestrator | 2025-09-29 06:20:32.490987 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-29 06:20:32.491003 | orchestrator | Monday 29 September 2025 06:18:06 +0000 (0:00:00.293) 0:00:23.378 ****** 2025-09-29 06:20:32.491018 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-29 06:20:32.491036 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-29 06:20:32.491052 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-29 06:20:32.491066 | orchestrator | 2025-09-29 06:20:32.491089 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] 
**************
2025-09-29 06:20:32.491099 | orchestrator | Monday 29 September 2025 06:18:08 +0000 (0:00:01.586) 0:00:24.965 ******
2025-09-29 06:20:32.491109 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-29 06:20:32.491118 | orchestrator |
2025-09-29 06:20:32.491127 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-09-29 06:20:32.491137 | orchestrator | Monday 29 September 2025 06:18:09 +0000 (0:00:00.916) 0:00:25.882 ******
2025-09-29 06:20:32.491146 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.491155 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:20:32.491165 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:20:32.491174 | orchestrator |
2025-09-29 06:20:32.491183 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-09-29 06:20:32.491193 | orchestrator | Monday 29 September 2025 06:18:09 +0000 (0:00:00.771) 0:00:26.654 ******
2025-09-29 06:20:32.491202 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-29 06:20:32.491211 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-29 06:20:32.491221 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-29 06:20:32.491230 | orchestrator |
2025-09-29 06:20:32.491239 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-09-29 06:20:32.491249 | orchestrator | Monday 29 September 2025 06:18:10 +0000 (0:00:01.139) 0:00:27.793 ******
2025-09-29 06:20:32.491266 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:20:32.491277 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:20:32.491286 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:20:32.491296 | orchestrator |
2025-09-29 06:20:32.491305 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-09-29 06:20:32.491393 | orchestrator | Monday 29 September 2025 06:18:11 +0000 (0:00:00.314) 0:00:28.108 ******
2025-09-29 06:20:32.491418 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-29 06:20:32.491436 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-29 06:20:32.491453 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-29 06:20:32.491469 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-29 06:20:32.491486 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-29 06:20:32.491496 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-29 06:20:32.491632 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-29 06:20:32.491648 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-29 06:20:32.491659 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-29 06:20:32.491670 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-29 06:20:32.491681 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-29 06:20:32.491692 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-29 06:20:32.491702 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-29 06:20:32.491712 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-29 06:20:32.491721 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-29 06:20:32.491731 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-29 06:20:32.491740 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-29 06:20:32.491750 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-29 06:20:32.491759 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-29 06:20:32.491769 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-29 06:20:32.491778 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-29 06:20:32.491787 | orchestrator |
2025-09-29 06:20:32.491797 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-09-29 06:20:32.491807 | orchestrator | Monday 29 September 2025 06:18:20 +0000 (0:00:08.890) 0:00:36.999 ******
2025-09-29 06:20:32.491816 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-29 06:20:32.491825 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-29 06:20:32.491835 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-29 06:20:32.491844 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-29 06:20:32.491853 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-29 06:20:32.491863 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-29 06:20:32.491872 | orchestrator |
2025-09-29 06:20:32.491881 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-09-29 06:20:32.491895 | orchestrator | Monday 29 September 2025 06:18:22 +0000 (0:00:02.626) 0:00:39.625 ******
2025-09-29 06:20:32.491913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:20:32.491931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:20:32.491941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-29 06:20:32.491950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:20:32.491964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:20:32.491985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-29 06:20:32.492007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:20:32.492021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:20:32.492043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-29 06:20:32.492058 | orchestrator |
2025-09-29 06:20:32.492072 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-29 06:20:32.492086 | orchestrator | Monday 29 September 2025 06:18:25 +0000 (0:00:02.318) 0:00:41.943 ******
2025-09-29 06:20:32.492100 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.492114 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:20:32.492127 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:20:32.492139 | orchestrator |
2025-09-29 06:20:32.492151 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-09-29 06:20:32.492164 | orchestrator | Monday 29 September 2025 06:18:25 +0000 (0:00:00.272) 0:00:42.216 ******
2025-09-29 06:20:32.492177 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492197 | orchestrator |
2025-09-29 06:20:32.492210 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-09-29 06:20:32.492223 | orchestrator | Monday 29 September 2025 06:18:27 +0000 (0:00:02.457) 0:00:44.674 ******
2025-09-29 06:20:32.492237 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492250 | orchestrator |
2025-09-29 06:20:32.492263 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-09-29 06:20:32.492276 | orchestrator | Monday 29 September 2025 06:18:30 +0000 (0:00:02.354) 0:00:47.028 ******
2025-09-29 06:20:32.492293 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:20:32.492301 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:20:32.492309 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:20:32.492317 | orchestrator |
2025-09-29 06:20:32.492351 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-09-29 06:20:32.492360 | orchestrator | Monday 29 September 2025 06:18:30 +0000 (0:00:00.790) 0:00:47.819 ******
2025-09-29 06:20:32.492368 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:20:32.492376 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:20:32.492383 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:20:32.492391 | orchestrator |
2025-09-29 06:20:32.492399 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-09-29 06:20:32.492407 | orchestrator | Monday 29 September 2025 06:18:31 +0000 (0:00:00.354) 0:00:48.284 ******
2025-09-29 06:20:32.492414 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.492422 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:20:32.492435 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:20:32.492443 | orchestrator |
2025-09-29 06:20:32.492451 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-09-29 06:20:32.492459 | orchestrator | Monday 29 September 2025 06:18:31 +0000 (0:00:00.354) 0:00:48.639 ******
2025-09-29 06:20:32.492466 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492474 | orchestrator |
2025-09-29 06:20:32.492482 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-09-29 06:20:32.492490 | orchestrator | Monday 29 September 2025 06:18:47 +0000 (0:00:15.900) 0:01:04.540 ******
2025-09-29 06:20:32.492497 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492505 | orchestrator |
2025-09-29 06:20:32.492513 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-29 06:20:32.492520 | orchestrator | Monday 29 September 2025 06:18:59 +0000 (0:00:11.558) 0:01:16.098 ******
2025-09-29 06:20:32.492528 | orchestrator |
2025-09-29 06:20:32.492536 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-29 06:20:32.492543 | orchestrator | Monday 29 September 2025 06:18:59 +0000 (0:00:00.063) 0:01:16.162 ******
2025-09-29 06:20:32.492551 | orchestrator |
2025-09-29 06:20:32.492559 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-09-29 06:20:32.492566 | orchestrator | Monday 29 September 2025 06:18:59 +0000 (0:00:00.060) 0:01:16.222 ******
2025-09-29 06:20:32.492574 | orchestrator |
2025-09-29 06:20:32.492589 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-09-29 06:20:32.492598 | orchestrator | Monday 29 September 2025 06:18:59 +0000 (0:00:00.068) 0:01:16.291 ******
2025-09-29 06:20:32.492606 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492613 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:20:32.492621 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:20:32.492629 | orchestrator |
2025-09-29 06:20:32.492636 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-09-29 06:20:32.492644 | orchestrator | Monday 29 September 2025 06:19:15 +0000 (0:00:15.738) 0:01:32.029 ******
2025-09-29 06:20:32.492652 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:20:32.492660 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492667 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:20:32.492675 | orchestrator |
2025-09-29 06:20:32.492682 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-09-29 06:20:32.492690 | orchestrator | Monday 29 September 2025 06:19:25 +0000 (0:00:09.903) 0:01:41.933 ******
2025-09-29 06:20:32.492698 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492705 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:20:32.492713 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:20:32.492721 | orchestrator |
2025-09-29 06:20:32.492728 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-29 06:20:32.492736 | orchestrator | Monday 29 September 2025 06:19:37 +0000 (0:00:12.242) 0:01:54.176 ******
2025-09-29 06:20:32.492751 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:20:32.492759 | orchestrator |
2025-09-29 06:20:32.492766 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-09-29 06:20:32.492774 | orchestrator | Monday 29 September 2025 06:19:37 +0000 (0:00:00.595) 0:01:54.772 ******
2025-09-29 06:20:32.492782 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:20:32.492789 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:20:32.492797 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:20:32.492804 | orchestrator |
2025-09-29 06:20:32.492812 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-09-29 06:20:32.492820 | orchestrator | Monday 29 September 2025 06:19:38 +0000 (0:00:00.702) 0:01:55.475 ******
2025-09-29 06:20:32.492828 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:20:32.492835 | orchestrator |
2025-09-29 06:20:32.492843 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-09-29 06:20:32.492851 | orchestrator | Monday 29 September 2025 06:19:40 +0000 (0:00:01.703) 0:01:57.179 ******
2025-09-29 06:20:32.492858 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-09-29 06:20:32.492866 | orchestrator |
2025-09-29 06:20:32.492874 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-09-29 06:20:32.492881 | orchestrator | Monday 29 September 2025 06:19:52 +0000 (0:00:12.304) 0:02:09.483 ******
2025-09-29 06:20:32.492889 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-09-29 06:20:32.492896 | orchestrator |
2025-09-29 06:20:32.492904 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-09-29 06:20:32.492912 | orchestrator | Monday 29 September 2025 06:20:18 +0000 (0:00:25.380) 0:02:34.863 ******
2025-09-29 06:20:32.492920 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-09-29 06:20:32.492928 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-09-29 06:20:32.492935 | orchestrator |
2025-09-29 06:20:32.492943 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-09-29 06:20:32.492950 | orchestrator | Monday 29 September 2025 06:20:25 +0000 (0:00:07.254) 0:02:42.118 ******
2025-09-29 06:20:32.492958 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.492966 | orchestrator |
2025-09-29 06:20:32.492974 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-09-29 06:20:32.492981 | orchestrator | Monday 29 September 2025 06:20:25 +0000 (0:00:00.133) 0:02:42.251 ******
2025-09-29 06:20:32.492989 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.492997 | orchestrator |
2025-09-29 06:20:32.493004 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-09-29 06:20:32.493012 | orchestrator | Monday 29 September 2025 06:20:25 +0000 (0:00:00.128) 0:02:42.379 ******
2025-09-29 06:20:32.493019 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.493027 | orchestrator |
2025-09-29 06:20:32.493035 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-09-29 06:20:32.493043 | orchestrator | Monday 29 September 2025 06:20:25 +0000 (0:00:00.118) 0:02:42.498 ******
2025-09-29 06:20:32.493050 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.493058 | orchestrator |
2025-09-29 06:20:32.493069 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-09-29 06:20:32.493077 | orchestrator | Monday 29 September 2025 06:20:26 +0000 (0:00:03.348) 0:02:43.022 ******
2025-09-29 06:20:32.493085 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:20:32.493093 | orchestrator |
2025-09-29 06:20:32.493106 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-29 06:20:32.493119 | orchestrator | Monday 29 September 2025 06:20:29 +0000 (0:00:03.348) 0:02:46.371 ******
2025-09-29 06:20:32.493133 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:20:32.493145 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:20:32.493158 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:20:32.493187 | orchestrator |
2025-09-29 06:20:32.493202 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:20:32.493217 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-09-29 06:20:32.493231 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-09-29 06:20:32.493251 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-09-29 06:20:32.493266 | orchestrator |
2025-09-29 06:20:32.493274 | orchestrator |
2025-09-29 06:20:32.493282 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:20:32.493290 | orchestrator | Monday 29 September 2025 06:20:29 +0000 (0:00:00.424) 0:02:46.795 ******
2025-09-29 06:20:32.493298 | orchestrator | ===============================================================================
2025-09-29 06:20:32.493305 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.38s
2025-09-29 06:20:32.493313 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.90s
2025-09-29 06:20:32.493321 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.74s
2025-09-29 06:20:32.493375 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.30s
2025-09-29 06:20:32.493383 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.24s
2025-09-29 06:20:32.493391 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.56s
2025-09-29 06:20:32.493398 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.90s
2025-09-29 06:20:32.493406 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.89s
2025-09-29 06:20:32.493414 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.25s
2025-09-29 06:20:32.493422 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.98s
2025-09-29 06:20:32.493429 | orchestrator | keystone : Creating default user role ----------------------------------- 3.35s
2025-09-29 06:20:32.493437 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.19s
2025-09-29 06:20:32.493445 | orchestrator | keystone : Copying over config.json files for services ------------------ 2.97s
2025-09-29 06:20:32.493452 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.63s
2025-09-29 06:20:32.493460 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.48s
2025-09-29 06:20:32.493468 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.46s
2025-09-29 06:20:32.493476 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.35s
2025-09-29 06:20:32.493483 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.32s
2025-09-29 06:20:32.493491 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.70s
2025-09-29 06:20:32.493499 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.70s
2025-09-29 06:20:32.493507 | orchestrator | 2025-09-29 06:20:32 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:32.493515 | orchestrator | 2025-09-29 06:20:32 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:32.493522 | orchestrator | 2025-09-29 06:20:32 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:32.493530 | orchestrator | 2025-09-29 06:20:32 | INFO  | Task 01e27077-f7c4-4b51-8f04-7b750ddd88d2 is in state STARTED
2025-09-29 06:20:32.493538 | orchestrator | 2025-09-29 06:20:32 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:35.523926 | orchestrator | 2025-09-29 06:20:35 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:35.524150 | orchestrator | 2025-09-29 06:20:35 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:35.524707 | orchestrator | 2025-09-29 06:20:35 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:35.525258 | orchestrator | 2025-09-29 06:20:35 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:35.526003 | orchestrator | 2025-09-29 06:20:35 | INFO  | Task 01e27077-f7c4-4b51-8f04-7b750ddd88d2 is in state STARTED
2025-09-29 06:20:35.526131 | orchestrator | 2025-09-29 06:20:35 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:38.562795 | orchestrator | 2025-09-29 06:20:38 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:38.563279 | orchestrator | 2025-09-29 06:20:38 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:38.563871 | orchestrator | 2025-09-29 06:20:38 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:38.564670 | orchestrator | 2025-09-29 06:20:38 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:38.565315 | orchestrator | 2025-09-29 06:20:38 | INFO  | Task 01e27077-f7c4-4b51-8f04-7b750ddd88d2 is in state SUCCESS
2025-09-29 06:20:38.565381 | orchestrator | 2025-09-29 06:20:38 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:41.601925 | orchestrator | 2025-09-29 06:20:41 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:41.602911 | orchestrator | 2025-09-29 06:20:41 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:20:41.603822 | orchestrator | 2025-09-29 06:20:41 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:41.607094 | orchestrator | 2025-09-29 06:20:41 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:41.607698 | orchestrator | 2025-09-29 06:20:41 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:41.609162 | orchestrator | 2025-09-29 06:20:41 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:44.640676 | orchestrator | 2025-09-29 06:20:44 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:44.640780 | orchestrator | 2025-09-29 06:20:44 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:20:44.641522 | orchestrator | 2025-09-29 06:20:44 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:44.641942 | orchestrator | 2025-09-29 06:20:44 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:44.642780 | orchestrator | 2025-09-29 06:20:44 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:44.642809 | orchestrator | 2025-09-29 06:20:44 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:47.684431 | orchestrator | 2025-09-29 06:20:47 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:47.685141 | orchestrator | 2025-09-29 06:20:47 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:20:47.687322 | orchestrator | 2025-09-29 06:20:47 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:47.687947 | orchestrator | 2025-09-29 06:20:47 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:47.691968 | orchestrator | 2025-09-29 06:20:47 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:47.692049 | orchestrator | 2025-09-29 06:20:47 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:50.731776 | orchestrator | 2025-09-29 06:20:50 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:50.731989 | orchestrator | 2025-09-29 06:20:50 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:20:50.732756 | orchestrator | 2025-09-29 06:20:50 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:50.733660 | orchestrator | 2025-09-29 06:20:50 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:50.734494 | orchestrator | 2025-09-29 06:20:50 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:50.734519 | orchestrator | 2025-09-29 06:20:50 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:53.810122 | orchestrator | 2025-09-29 06:20:53 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:53.810246 | orchestrator | 2025-09-29 06:20:53 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:20:53.810269 | orchestrator | 2025-09-29 06:20:53 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:53.810286 | orchestrator | 2025-09-29 06:20:53 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:53.810325 | orchestrator | 2025-09-29 06:20:53 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:53.810414 | orchestrator | 2025-09-29 06:20:53 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:56.801757 | orchestrator | 2025-09-29 06:20:56 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:56.801861 | orchestrator | 2025-09-29 06:20:56 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:20:56.801877 | orchestrator | 2025-09-29 06:20:56 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:56.802879 | orchestrator | 2025-09-29 06:20:56 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:56.803155 | orchestrator | 2025-09-29 06:20:56 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:56.803182 | orchestrator | 2025-09-29 06:20:56 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:20:59.852102 | orchestrator | 2025-09-29 06:20:59 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:20:59.854544 | orchestrator | 2025-09-29 06:20:59 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:20:59.856264 | orchestrator | 2025-09-29 06:20:59 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:20:59.858706 | orchestrator | 2025-09-29 06:20:59 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:20:59.862099 | orchestrator | 2025-09-29 06:20:59 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:20:59.862191 | orchestrator | 2025-09-29 06:20:59 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:21:02.900670 | orchestrator | 2025-09-29 06:21:02 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:21:02.900754 | orchestrator | 2025-09-29 06:21:02 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:21:02.900764 | orchestrator | 2025-09-29 06:21:02 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:21:02.902155 | orchestrator | 2025-09-29 06:21:02 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:21:02.902173 | orchestrator | 2025-09-29 06:21:02 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:21:02.902181 | orchestrator | 2025-09-29 06:21:02 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:21:05.935460 | orchestrator | 2025-09-29 06:21:05 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:21:05.935563 | orchestrator | 2025-09-29 06:21:05 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:21:05.935996 | orchestrator | 2025-09-29 06:21:05 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:21:05.936527 | orchestrator | 2025-09-29 06:21:05 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:21:05.937432 | orchestrator | 2025-09-29 06:21:05 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:21:05.937461 | orchestrator | 2025-09-29 06:21:05 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:21:09.637637 | orchestrator | 2025-09-29 06:21:08 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:21:09.637736 | orchestrator | 2025-09-29 06:21:08 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:21:09.637752 | orchestrator | 2025-09-29 06:21:08 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:21:09.637764 | orchestrator | 2025-09-29 06:21:08 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:21:09.637775 | orchestrator | 2025-09-29 06:21:08 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:21:09.637787 | orchestrator | 2025-09-29 06:21:08 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:21:11.989827 | orchestrator | 2025-09-29 06:21:11 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:21:11.990589 | orchestrator | 2025-09-29 06:21:11 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:21:11.990666 | orchestrator | 2025-09-29 06:21:11 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:21:11.991987 | orchestrator | 2025-09-29 06:21:11 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:21:11.992528 | orchestrator | 2025-09-29 06:21:11 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:21:11.992553 | orchestrator | 2025-09-29 06:21:11 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:21:15.025670 | orchestrator | 2025-09-29 06:21:15 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:21:15.025752 | orchestrator | 2025-09-29 06:21:15 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:21:15.025762 | orchestrator | 2025-09-29 06:21:15 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:21:15.025770 | orchestrator | 2025-09-29 06:21:15 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:21:15.025777 | orchestrator | 2025-09-29 06:21:15 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:21:15.025784 | orchestrator | 2025-09-29 06:21:15 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:21:18.055067 | orchestrator | 2025-09-29 06:21:18 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED
2025-09-29 06:21:18.055258 | orchestrator | 2025-09-29 06:21:18 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:21:18.055907 | orchestrator | 2025-09-29 06:21:18 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:21:18.056440 | orchestrator | 2025-09-29 06:21:18 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED
2025-09-29 06:21:18.057167 | orchestrator | 2025-09-29 06:21:18 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED
2025-09-29 06:21:18.057189 | orchestrator | 2025-09-29 06:21:18 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:21:21.097842 | orchestrator | 2025-09-29 06:21:21 | INFO  |
Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:21.099003 | orchestrator | 2025-09-29 06:21:21 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:21.099649 | orchestrator | 2025-09-29 06:21:21 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:21.100649 | orchestrator | 2025-09-29 06:21:21 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:21.102077 | orchestrator | 2025-09-29 06:21:21 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:21.102269 | orchestrator | 2025-09-29 06:21:21 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:24.132114 | orchestrator | 2025-09-29 06:21:24 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:24.132328 | orchestrator | 2025-09-29 06:21:24 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:24.132967 | orchestrator | 2025-09-29 06:21:24 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:24.133570 | orchestrator | 2025-09-29 06:21:24 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:24.134248 | orchestrator | 2025-09-29 06:21:24 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:24.134310 | orchestrator | 2025-09-29 06:21:24 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:27.268583 | orchestrator | 2025-09-29 06:21:27 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:27.268689 | orchestrator | 2025-09-29 06:21:27 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:27.268705 | orchestrator | 2025-09-29 06:21:27 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:27.268724 | orchestrator | 2025-09-29 06:21:27 | INFO  | Task 
5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:27.268743 | orchestrator | 2025-09-29 06:21:27 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:27.268761 | orchestrator | 2025-09-29 06:21:27 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:30.197964 | orchestrator | 2025-09-29 06:21:30 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:30.198136 | orchestrator | 2025-09-29 06:21:30 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:30.198892 | orchestrator | 2025-09-29 06:21:30 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:30.199888 | orchestrator | 2025-09-29 06:21:30 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:30.200592 | orchestrator | 2025-09-29 06:21:30 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:30.200688 | orchestrator | 2025-09-29 06:21:30 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:33.244649 | orchestrator | 2025-09-29 06:21:33 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:33.245629 | orchestrator | 2025-09-29 06:21:33 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:33.246124 | orchestrator | 2025-09-29 06:21:33 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:33.246948 | orchestrator | 2025-09-29 06:21:33 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:33.247458 | orchestrator | 2025-09-29 06:21:33 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:33.247490 | orchestrator | 2025-09-29 06:21:33 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:36.277443 | orchestrator | 2025-09-29 06:21:36 | INFO  | Task 
afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:36.277549 | orchestrator | 2025-09-29 06:21:36 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:36.278075 | orchestrator | 2025-09-29 06:21:36 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:36.278623 | orchestrator | 2025-09-29 06:21:36 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:36.279250 | orchestrator | 2025-09-29 06:21:36 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:36.279291 | orchestrator | 2025-09-29 06:21:36 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:39.301865 | orchestrator | 2025-09-29 06:21:39 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:39.303668 | orchestrator | 2025-09-29 06:21:39 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:39.305085 | orchestrator | 2025-09-29 06:21:39 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:39.305545 | orchestrator | 2025-09-29 06:21:39 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:39.306237 | orchestrator | 2025-09-29 06:21:39 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:39.306284 | orchestrator | 2025-09-29 06:21:39 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:42.328126 | orchestrator | 2025-09-29 06:21:42 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:42.328228 | orchestrator | 2025-09-29 06:21:42 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:42.328777 | orchestrator | 2025-09-29 06:21:42 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:42.329451 | orchestrator | 2025-09-29 06:21:42 | INFO  | Task 
5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:42.329890 | orchestrator | 2025-09-29 06:21:42 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:42.330080 | orchestrator | 2025-09-29 06:21:42 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:45.360023 | orchestrator | 2025-09-29 06:21:45 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:45.360190 | orchestrator | 2025-09-29 06:21:45 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:45.360927 | orchestrator | 2025-09-29 06:21:45 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:45.361276 | orchestrator | 2025-09-29 06:21:45 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:45.362322 | orchestrator | 2025-09-29 06:21:45 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:45.362370 | orchestrator | 2025-09-29 06:21:45 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:48.390090 | orchestrator | 2025-09-29 06:21:48 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:48.390741 | orchestrator | 2025-09-29 06:21:48 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:48.392708 | orchestrator | 2025-09-29 06:21:48 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:48.394635 | orchestrator | 2025-09-29 06:21:48 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:48.396688 | orchestrator | 2025-09-29 06:21:48 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:48.396827 | orchestrator | 2025-09-29 06:21:48 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:51.425357 | orchestrator | 2025-09-29 06:21:51 | INFO  | Task 
afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:51.425789 | orchestrator | 2025-09-29 06:21:51 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:51.426641 | orchestrator | 2025-09-29 06:21:51 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:51.427618 | orchestrator | 2025-09-29 06:21:51 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:51.428220 | orchestrator | 2025-09-29 06:21:51 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:51.428460 | orchestrator | 2025-09-29 06:21:51 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:54.470542 | orchestrator | 2025-09-29 06:21:54 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:54.471309 | orchestrator | 2025-09-29 06:21:54 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:54.472264 | orchestrator | 2025-09-29 06:21:54 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:54.472851 | orchestrator | 2025-09-29 06:21:54 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:54.473893 | orchestrator | 2025-09-29 06:21:54 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:54.473928 | orchestrator | 2025-09-29 06:21:54 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:21:57.497315 | orchestrator | 2025-09-29 06:21:57 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:21:57.497577 | orchestrator | 2025-09-29 06:21:57 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:21:57.498500 | orchestrator | 2025-09-29 06:21:57 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:21:57.499928 | orchestrator | 2025-09-29 06:21:57 | INFO  | Task 
5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:21:57.500808 | orchestrator | 2025-09-29 06:21:57 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state STARTED 2025-09-29 06:21:57.500880 | orchestrator | 2025-09-29 06:21:57 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:00.541701 | orchestrator | 2025-09-29 06:22:00 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:00.542202 | orchestrator | 2025-09-29 06:22:00 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:00.543095 | orchestrator | 2025-09-29 06:22:00 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:00.543970 | orchestrator | 2025-09-29 06:22:00 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:00.544205 | orchestrator | 2025-09-29 06:22:00 | INFO  | Task 24238541-e497-40e8-9cfb-513c57ca3350 is in state SUCCESS 2025-09-29 06:22:00.544489 | orchestrator | 2025-09-29 06:22:00 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:03.572724 | orchestrator | 2025-09-29 06:22:03 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:03.573052 | orchestrator | 2025-09-29 06:22:03 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:03.573864 | orchestrator | 2025-09-29 06:22:03 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:03.574667 | orchestrator | 2025-09-29 06:22:03 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:03.576002 | orchestrator | 2025-09-29 06:22:03 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:06.601945 | orchestrator | 2025-09-29 06:22:06 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:06.603709 | orchestrator | 2025-09-29 06:22:06 | INFO  | Task 
7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:06.605251 | orchestrator | 2025-09-29 06:22:06 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:06.606655 | orchestrator | 2025-09-29 06:22:06 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:06.606916 | orchestrator | 2025-09-29 06:22:06 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:09.630697 | orchestrator | 2025-09-29 06:22:09 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:09.631174 | orchestrator | 2025-09-29 06:22:09 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:09.632027 | orchestrator | 2025-09-29 06:22:09 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:09.632983 | orchestrator | 2025-09-29 06:22:09 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:09.633013 | orchestrator | 2025-09-29 06:22:09 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:12.656824 | orchestrator | 2025-09-29 06:22:12 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:12.658640 | orchestrator | 2025-09-29 06:22:12 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:12.659569 | orchestrator | 2025-09-29 06:22:12 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:12.660800 | orchestrator | 2025-09-29 06:22:12 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:12.661034 | orchestrator | 2025-09-29 06:22:12 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:15.690220 | orchestrator | 2025-09-29 06:22:15 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:15.690574 | orchestrator | 2025-09-29 06:22:15 | INFO  | Task 
7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:15.691123 | orchestrator | 2025-09-29 06:22:15 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:15.691955 | orchestrator | 2025-09-29 06:22:15 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:15.692031 | orchestrator | 2025-09-29 06:22:15 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:18.717465 | orchestrator | 2025-09-29 06:22:18 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:18.718301 | orchestrator | 2025-09-29 06:22:18 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:18.718517 | orchestrator | 2025-09-29 06:22:18 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:18.719428 | orchestrator | 2025-09-29 06:22:18 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:18.719458 | orchestrator | 2025-09-29 06:22:18 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:21.741875 | orchestrator | 2025-09-29 06:22:21 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:21.742249 | orchestrator | 2025-09-29 06:22:21 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:21.743202 | orchestrator | 2025-09-29 06:22:21 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:21.748221 | orchestrator | 2025-09-29 06:22:21 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:21.748295 | orchestrator | 2025-09-29 06:22:21 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:24.774965 | orchestrator | 2025-09-29 06:22:24 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:24.775305 | orchestrator | 2025-09-29 06:22:24 | INFO  | Task 
7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:24.776567 | orchestrator | 2025-09-29 06:22:24 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:24.778465 | orchestrator | 2025-09-29 06:22:24 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:24.779530 | orchestrator | 2025-09-29 06:22:24 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:27.803392 | orchestrator | 2025-09-29 06:22:27 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:27.803818 | orchestrator | 2025-09-29 06:22:27 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:27.804867 | orchestrator | 2025-09-29 06:22:27 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:27.805336 | orchestrator | 2025-09-29 06:22:27 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:27.805381 | orchestrator | 2025-09-29 06:22:27 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:30.830299 | orchestrator | 2025-09-29 06:22:30 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:30.830595 | orchestrator | 2025-09-29 06:22:30 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:30.831102 | orchestrator | 2025-09-29 06:22:30 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:30.831932 | orchestrator | 2025-09-29 06:22:30 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:30.831974 | orchestrator | 2025-09-29 06:22:30 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:33.921129 | orchestrator | 2025-09-29 06:22:33 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:33.921771 | orchestrator | 2025-09-29 06:22:33 | INFO  | Task 
7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:33.922996 | orchestrator | 2025-09-29 06:22:33 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:33.924497 | orchestrator | 2025-09-29 06:22:33 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:33.924530 | orchestrator | 2025-09-29 06:22:33 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:36.961977 | orchestrator | 2025-09-29 06:22:36 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:36.962559 | orchestrator | 2025-09-29 06:22:36 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:36.963583 | orchestrator | 2025-09-29 06:22:36 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:36.964366 | orchestrator | 2025-09-29 06:22:36 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:36.964571 | orchestrator | 2025-09-29 06:22:36 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:40.003308 | orchestrator | 2025-09-29 06:22:40 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:40.003575 | orchestrator | 2025-09-29 06:22:40 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:40.004455 | orchestrator | 2025-09-29 06:22:40 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:40.005390 | orchestrator | 2025-09-29 06:22:40 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:40.005506 | orchestrator | 2025-09-29 06:22:40 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:43.039340 | orchestrator | 2025-09-29 06:22:43 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:43.039933 | orchestrator | 2025-09-29 06:22:43 | INFO  | Task 
7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:43.041177 | orchestrator | 2025-09-29 06:22:43 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:43.042217 | orchestrator | 2025-09-29 06:22:43 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:43.042310 | orchestrator | 2025-09-29 06:22:43 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:46.114595 | orchestrator | 2025-09-29 06:22:46 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:46.114838 | orchestrator | 2025-09-29 06:22:46 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:46.116440 | orchestrator | 2025-09-29 06:22:46 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:46.117166 | orchestrator | 2025-09-29 06:22:46 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state STARTED 2025-09-29 06:22:46.117192 | orchestrator | 2025-09-29 06:22:46 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:49.144914 | orchestrator | 2025-09-29 06:22:49 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:22:49.145007 | orchestrator | 2025-09-29 06:22:49 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:49.145637 | orchestrator | 2025-09-29 06:22:49 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:49.146295 | orchestrator | 2025-09-29 06:22:49 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:49.147689 | orchestrator | 2025-09-29 06:22:49 | INFO  | Task 5f735069-e64d-455b-9419-aaf47f31a8c5 is in state SUCCESS 2025-09-29 06:22:49.147799 | orchestrator | 2025-09-29 06:22:49.147817 | orchestrator | 2025-09-29 06:22:49.147825 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-09-29 06:22:49.147857 | orchestrator | 2025-09-29 06:22:49.147866 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:22:49.147874 | orchestrator | Monday 29 September 2025 06:20:35 +0000 (0:00:00.164) 0:00:00.164 ****** 2025-09-29 06:22:49.147882 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:22:49.147892 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:22:49.147900 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:22:49.147907 | orchestrator | 2025-09-29 06:22:49.147915 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:22:49.147922 | orchestrator | Monday 29 September 2025 06:20:35 +0000 (0:00:00.342) 0:00:00.507 ****** 2025-09-29 06:22:49.147930 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-29 06:22:49.147939 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-29 06:22:49.147947 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-29 06:22:49.147955 | orchestrator | 2025-09-29 06:22:49.147964 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-29 06:22:49.147972 | orchestrator | 2025-09-29 06:22:49.147980 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-29 06:22:49.147988 | orchestrator | Monday 29 September 2025 06:20:36 +0000 (0:00:00.933) 0:00:01.440 ****** 2025-09-29 06:22:49.147995 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:22:49.148003 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:22:49.148011 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:22:49.148019 | orchestrator | 2025-09-29 06:22:49.148066 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:22:49.148097 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2025-09-29 06:22:49.148110 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:22:49.148119 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:22:49.148158 | orchestrator | 2025-09-29 06:22:49.148166 | orchestrator | 2025-09-29 06:22:49.148175 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:22:49.148183 | orchestrator | Monday 29 September 2025 06:20:37 +0000 (0:00:00.713) 0:00:02.154 ****** 2025-09-29 06:22:49.148193 | orchestrator | =============================================================================== 2025-09-29 06:22:49.148224 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2025-09-29 06:22:49.148233 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.71s 2025-09-29 06:22:49.148241 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-09-29 06:22:49.148248 | orchestrator | 2025-09-29 06:22:49.148318 | orchestrator | 2025-09-29 06:22:49.148326 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-09-29 06:22:49.148333 | orchestrator | 2025-09-29 06:22:49.148340 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-29 06:22:49.148348 | orchestrator | Monday 29 September 2025 06:20:34 +0000 (0:00:00.270) 0:00:00.270 ****** 2025-09-29 06:22:49.148356 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148366 | orchestrator | 2025-09-29 06:22:49.148374 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-29 06:22:49.148382 | orchestrator | Monday 29 September 2025 06:20:36 +0000 (0:00:02.111) 0:00:02.381 ****** 2025-09-29 06:22:49.148389 | orchestrator | 
changed: [testbed-manager] 2025-09-29 06:22:49.148396 | orchestrator | 2025-09-29 06:22:49.148404 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-29 06:22:49.148482 | orchestrator | Monday 29 September 2025 06:20:37 +0000 (0:00:00.865) 0:00:03.246 ****** 2025-09-29 06:22:49.148493 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148501 | orchestrator | 2025-09-29 06:22:49.148521 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-29 06:22:49.148529 | orchestrator | Monday 29 September 2025 06:20:38 +0000 (0:00:01.013) 0:00:04.259 ****** 2025-09-29 06:22:49.148537 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148544 | orchestrator | 2025-09-29 06:22:49.148583 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-29 06:22:49.148693 | orchestrator | Monday 29 September 2025 06:20:39 +0000 (0:00:01.280) 0:00:05.540 ****** 2025-09-29 06:22:49.148704 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148733 | orchestrator | 2025-09-29 06:22:49.148743 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-29 06:22:49.148752 | orchestrator | Monday 29 September 2025 06:20:41 +0000 (0:00:01.413) 0:00:06.953 ****** 2025-09-29 06:22:49.148761 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148769 | orchestrator | 2025-09-29 06:22:49.148802 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-29 06:22:49.148814 | orchestrator | Monday 29 September 2025 06:20:42 +0000 (0:00:00.971) 0:00:07.925 ****** 2025-09-29 06:22:49.148823 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148831 | orchestrator | 2025-09-29 06:22:49.148870 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-29 
06:22:49.148904 | orchestrator | Monday 29 September 2025 06:20:43 +0000 (0:00:01.228) 0:00:09.153 ****** 2025-09-29 06:22:49.148913 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148921 | orchestrator | 2025-09-29 06:22:49.148929 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-29 06:22:49.148938 | orchestrator | Monday 29 September 2025 06:20:44 +0000 (0:00:00.999) 0:00:10.153 ****** 2025-09-29 06:22:49.148946 | orchestrator | changed: [testbed-manager] 2025-09-29 06:22:49.148954 | orchestrator | 2025-09-29 06:22:49.149005 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-29 06:22:49.149018 | orchestrator | Monday 29 September 2025 06:21:35 +0000 (0:00:51.074) 0:01:01.228 ****** 2025-09-29 06:22:49.149076 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:22:49.149109 | orchestrator | 2025-09-29 06:22:49.149118 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-29 06:22:49.149127 | orchestrator | 2025-09-29 06:22:49.149136 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-29 06:22:49.149145 | orchestrator | Monday 29 September 2025 06:21:35 +0000 (0:00:00.117) 0:01:01.345 ****** 2025-09-29 06:22:49.149153 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.149160 | orchestrator | 2025-09-29 06:22:49.149169 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-29 06:22:49.149177 | orchestrator | 2025-09-29 06:22:49.149185 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-29 06:22:49.149194 | orchestrator | Monday 29 September 2025 06:21:47 +0000 (0:00:11.517) 0:01:12.863 ****** 2025-09-29 06:22:49.149203 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:22:49.149211 | orchestrator | 2025-09-29 
06:22:49.149220 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-29 06:22:49.149228 | orchestrator | 2025-09-29 06:22:49.149237 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-29 06:22:49.149245 | orchestrator | Monday 29 September 2025 06:21:48 +0000 (0:00:01.202) 0:01:14.066 ****** 2025-09-29 06:22:49.149254 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:22:49.149262 | orchestrator | 2025-09-29 06:22:49.149271 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:22:49.149280 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-29 06:22:49.149291 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:22:49.149300 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:22:49.149319 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:22:49.149328 | orchestrator | 2025-09-29 06:22:49.149337 | orchestrator | 2025-09-29 06:22:49.149345 | orchestrator | 2025-09-29 06:22:49.149354 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:22:49.149363 | orchestrator | Monday 29 September 2025 06:21:59 +0000 (0:00:11.194) 0:01:25.261 ****** 2025-09-29 06:22:49.149371 | orchestrator | =============================================================================== 2025-09-29 06:22:49.149379 | orchestrator | Create admin user ------------------------------------------------------ 51.07s 2025-09-29 06:22:49.149389 | orchestrator | Restart ceph manager service ------------------------------------------- 23.92s 2025-09-29 06:22:49.149396 | orchestrator | Disable the ceph dashboard 
---------------------------------------------- 2.11s 2025-09-29 06:22:49.149404 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.41s 2025-09-29 06:22:49.149432 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.28s 2025-09-29 06:22:49.149441 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.23s 2025-09-29 06:22:49.149450 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.01s 2025-09-29 06:22:49.149458 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.00s 2025-09-29 06:22:49.149466 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.97s 2025-09-29 06:22:49.149474 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.87s 2025-09-29 06:22:49.149482 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.12s 2025-09-29 06:22:49.149490 | orchestrator | 2025-09-29 06:22:49.149914 | orchestrator | 2025-09-29 06:22:49.149939 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:22:49.149948 | orchestrator | 2025-09-29 06:22:49.149956 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:22:49.149964 | orchestrator | Monday 29 September 2025 06:20:36 +0000 (0:00:00.316) 0:00:00.316 ****** 2025-09-29 06:22:49.149973 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:22:49.149983 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:22:49.149991 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:22:49.150000 | orchestrator | 2025-09-29 06:22:49.150008 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:22:49.150061 | orchestrator | Monday 29 September 2025 06:20:36 +0000 
(0:00:00.280) 0:00:00.596 ****** 2025-09-29 06:22:49.150073 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-09-29 06:22:49.150082 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-09-29 06:22:49.150090 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-09-29 06:22:49.150098 | orchestrator | 2025-09-29 06:22:49.150107 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-09-29 06:22:49.150115 | orchestrator | 2025-09-29 06:22:49.150123 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-29 06:22:49.150131 | orchestrator | Monday 29 September 2025 06:20:36 +0000 (0:00:00.489) 0:00:01.086 ****** 2025-09-29 06:22:49.150139 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:22:49.150149 | orchestrator | 2025-09-29 06:22:49.150158 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-29 06:22:49.150166 | orchestrator | Monday 29 September 2025 06:20:37 +0000 (0:00:00.603) 0:00:01.689 ****** 2025-09-29 06:22:49.150175 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-29 06:22:49.150183 | orchestrator | 2025-09-29 06:22:49.150199 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-29 06:22:49.150218 | orchestrator | Monday 29 September 2025 06:20:41 +0000 (0:00:04.236) 0:00:05.925 ****** 2025-09-29 06:22:49.150227 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-29 06:22:49.150235 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-29 06:22:49.150243 | orchestrator | 2025-09-29 06:22:49.150251 | orchestrator | TASK [service-ks-register : 
barbican | Creating projects] ********************** 2025-09-29 06:22:49.150260 | orchestrator | Monday 29 September 2025 06:20:48 +0000 (0:00:07.161) 0:00:13.087 ****** 2025-09-29 06:22:49.150268 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-29 06:22:49.150277 | orchestrator | 2025-09-29 06:22:49.150286 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-29 06:22:49.150294 | orchestrator | Monday 29 September 2025 06:20:53 +0000 (0:00:04.288) 0:00:17.375 ****** 2025-09-29 06:22:49.150303 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-29 06:22:49.150312 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-29 06:22:49.150321 | orchestrator | 2025-09-29 06:22:49.150329 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-29 06:22:49.150337 | orchestrator | Monday 29 September 2025 06:20:57 +0000 (0:00:04.717) 0:00:22.092 ****** 2025-09-29 06:22:49.150346 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-29 06:22:49.150354 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-29 06:22:49.150363 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-29 06:22:49.150371 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-29 06:22:49.150380 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-29 06:22:49.150389 | orchestrator | 2025-09-29 06:22:49.150397 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-29 06:22:49.150406 | orchestrator | Monday 29 September 2025 06:21:14 +0000 (0:00:17.008) 0:00:39.101 ****** 2025-09-29 06:22:49.150441 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-29 06:22:49.150448 | orchestrator | 2025-09-29 06:22:49.150457 | orchestrator | TASK [barbican : Ensuring config 
directories exist] **************************** 2025-09-29 06:22:49.150464 | orchestrator | Monday 29 September 2025 06:21:19 +0000 (0:00:04.631) 0:00:43.732 ****** 2025-09-29 06:22:49.150475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.150500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.150523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.150532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150607 | orchestrator | 2025-09-29 06:22:49.150614 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-09-29 06:22:49.150623 | orchestrator | Monday 29 September 2025 06:21:21 +0000 (0:00:02.205) 0:00:45.937 ****** 2025-09-29 06:22:49.150632 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-09-29 06:22:49.150641 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-09-29 06:22:49.150649 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-09-29 06:22:49.150658 | orchestrator | 2025-09-29 06:22:49.150667 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-09-29 06:22:49.150675 | orchestrator | Monday 29 September 2025 06:21:23 +0000 (0:00:01.791) 0:00:47.728 ****** 2025-09-29 06:22:49.150684 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:22:49.150692 | orchestrator | 2025-09-29 06:22:49.150701 | 
orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-09-29 06:22:49.150710 | orchestrator | Monday 29 September 2025 06:21:23 +0000 (0:00:00.203) 0:00:47.932 ****** 2025-09-29 06:22:49.150718 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:22:49.150727 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:22:49.150736 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:22:49.150745 | orchestrator | 2025-09-29 06:22:49.150753 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-29 06:22:49.150762 | orchestrator | Monday 29 September 2025 06:21:24 +0000 (0:00:00.609) 0:00:48.542 ****** 2025-09-29 06:22:49.150771 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:22:49.150779 | orchestrator | 2025-09-29 06:22:49.150788 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-29 06:22:49.150797 | orchestrator | Monday 29 September 2025 06:21:24 +0000 (0:00:00.562) 0:00:49.104 ****** 2025-09-29 06:22:49.150807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.150824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.150844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.150855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.150931 | orchestrator | 2025-09-29 
06:22:49.150940 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-29 06:22:49.150948 | orchestrator | Monday 29 September 2025 06:21:28 +0000 (0:00:03.909) 0:00:53.014 ****** 2025-09-29 06:22:49.150961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.150970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.150979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.150988 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:22:49.151007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151039 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:22:49.151047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151079 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:22:49.151088 | orchestrator | 2025-09-29 06:22:49.151095 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-09-29 06:22:49.151103 | orchestrator | Monday 29 September 2025 06:21:30 +0000 (0:00:02.188) 0:00:55.202 ****** 2025-09-29 06:22:49.151117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151146 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:22:49.151154 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151186 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:22:49.151201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151221 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151230 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:22:49.151238 | orchestrator | 2025-09-29 06:22:49.151247 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-09-29 06:22:49.151255 | orchestrator | Monday 29 September 2025 06:21:32 +0000 (0:00:02.008) 0:00:57.210 ****** 2025-09-29 06:22:49.151264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151338 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151370 | orchestrator | 2025-09-29 06:22:49.151378 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-09-29 06:22:49.151386 | orchestrator | Monday 29 September 2025 06:21:36 +0000 (0:00:03.890) 0:01:01.100 ****** 
2025-09-29 06:22:49.151394 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.151402 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:22:49.151410 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:22:49.151462 | orchestrator | 2025-09-29 06:22:49.151471 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-09-29 06:22:49.151480 | orchestrator | Monday 29 September 2025 06:21:39 +0000 (0:00:02.660) 0:01:03.761 ****** 2025-09-29 06:22:49.151488 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-29 06:22:49.151497 | orchestrator | 2025-09-29 06:22:49.151505 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-09-29 06:22:49.151511 | orchestrator | Monday 29 September 2025 06:21:40 +0000 (0:00:01.006) 0:01:04.767 ****** 2025-09-29 06:22:49.151523 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:22:49.151530 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:22:49.151538 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:22:49.151545 | orchestrator | 2025-09-29 06:22:49.151552 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-09-29 06:22:49.151559 | orchestrator | Monday 29 September 2025 06:21:41 +0000 (0:00:00.623) 0:01:05.390 ****** 2025-09-29 06:22:49.151566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151656 | orchestrator | 2025-09-29 06:22:49.151663 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-29 06:22:49.151674 | orchestrator | Monday 29 September 2025 06:21:51 +0000 (0:00:10.144) 0:01:15.535 ****** 2025-09-29 06:22:49.151681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151717 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:22:49.151725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151756 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:22:49.151764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-29 06:22:49.151780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:22:49.151795 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:22:49.151802 | orchestrator | 2025-09-29 06:22:49.151809 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-29 06:22:49.151816 | orchestrator | Monday 29 September 2025 06:21:53 +0000 (0:00:01.817) 0:01:17.353 ****** 2025-09-29 06:22:49.151822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-29 06:22:49.151861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:22:49.151899 | orchestrator | 2025-09-29 06:22:49.151904 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-09-29 06:22:49.151909 | orchestrator | Monday 29 September 2025 06:21:56 +0000 (0:00:03.554) 0:01:20.908 ****** 2025-09-29 06:22:49.151913 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:22:49.151918 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:22:49.151922 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:22:49.151926 | orchestrator | 2025-09-29 06:22:49.151934 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-09-29 06:22:49.151938 | orchestrator | Monday 29 September 2025 06:21:57 +0000 (0:00:00.450) 0:01:21.358 ****** 2025-09-29 06:22:49.151943 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.151947 | orchestrator | 2025-09-29 06:22:49.151952 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-09-29 06:22:49.151956 | orchestrator | Monday 29 September 2025 06:21:59 +0000 (0:00:02.518) 0:01:23.876 ****** 2025-09-29 06:22:49.151961 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.151965 | orchestrator | 2025-09-29 06:22:49.151969 | orchestrator | TASK [barbican : 
Running barbican bootstrap container] ************************* 2025-09-29 06:22:49.151974 | orchestrator | Monday 29 September 2025 06:22:02 +0000 (0:00:02.615) 0:01:26.492 ****** 2025-09-29 06:22:49.151978 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.151983 | orchestrator | 2025-09-29 06:22:49.151987 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-29 06:22:49.151992 | orchestrator | Monday 29 September 2025 06:22:14 +0000 (0:00:11.784) 0:01:38.276 ****** 2025-09-29 06:22:49.151996 | orchestrator | 2025-09-29 06:22:49.152001 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-29 06:22:49.152005 | orchestrator | Monday 29 September 2025 06:22:14 +0000 (0:00:00.062) 0:01:38.338 ****** 2025-09-29 06:22:49.152010 | orchestrator | 2025-09-29 06:22:49.152014 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-09-29 06:22:49.152019 | orchestrator | Monday 29 September 2025 06:22:14 +0000 (0:00:00.092) 0:01:38.431 ****** 2025-09-29 06:22:49.152023 | orchestrator | 2025-09-29 06:22:49.152027 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-09-29 06:22:49.152032 | orchestrator | Monday 29 September 2025 06:22:14 +0000 (0:00:00.162) 0:01:38.593 ****** 2025-09-29 06:22:49.152036 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.152041 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:22:49.152045 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:22:49.152050 | orchestrator | 2025-09-29 06:22:49.152054 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-09-29 06:22:49.152058 | orchestrator | Monday 29 September 2025 06:22:25 +0000 (0:00:10.946) 0:01:49.539 ****** 2025-09-29 06:22:49.152063 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:22:49.152067 | 
orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.152072 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:22:49.152076 | orchestrator | 2025-09-29 06:22:49.152081 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-09-29 06:22:49.152085 | orchestrator | Monday 29 September 2025 06:22:36 +0000 (0:00:10.803) 0:02:00.343 ****** 2025-09-29 06:22:49.152090 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:22:49.152094 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:22:49.152099 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:22:49.152103 | orchestrator | 2025-09-29 06:22:49.152108 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:22:49.152113 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-29 06:22:49.152121 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 06:22:49.152126 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 06:22:49.152131 | orchestrator | 2025-09-29 06:22:49.152135 | orchestrator | 2025-09-29 06:22:49.152140 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:22:49.152144 | orchestrator | Monday 29 September 2025 06:22:46 +0000 (0:00:10.819) 0:02:11.162 ****** 2025-09-29 06:22:49.152149 | orchestrator | =============================================================================== 2025-09-29 06:22:49.152153 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.01s 2025-09-29 06:22:49.152160 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.78s 2025-09-29 06:22:49.152165 | orchestrator | barbican : Restart barbican-api container ------------------------------ 
10.95s 2025-09-29 06:22:49.152169 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.82s 2025-09-29 06:22:49.152174 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.80s 2025-09-29 06:22:49.152178 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.14s 2025-09-29 06:22:49.152183 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.16s 2025-09-29 06:22:49.152187 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.72s 2025-09-29 06:22:49.152192 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.63s 2025-09-29 06:22:49.152196 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.29s 2025-09-29 06:22:49.152201 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.24s 2025-09-29 06:22:49.152205 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.91s 2025-09-29 06:22:49.152210 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.89s 2025-09-29 06:22:49.152214 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.55s 2025-09-29 06:22:49.152218 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.66s 2025-09-29 06:22:49.152223 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.62s 2025-09-29 06:22:49.152227 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.52s 2025-09-29 06:22:49.152235 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.21s 2025-09-29 06:22:49.152239 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.19s 
2025-09-29 06:22:49.152244 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.01s 2025-09-29 06:22:49.152248 | orchestrator | 2025-09-29 06:22:49 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:52.173368 | orchestrator | 2025-09-29 06:22:52 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:22:52.173522 | orchestrator | 2025-09-29 06:22:52 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:52.173830 | orchestrator | 2025-09-29 06:22:52 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:52.174360 | orchestrator | 2025-09-29 06:22:52 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:52.174383 | orchestrator | 2025-09-29 06:22:52 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:55.207402 | orchestrator | 2025-09-29 06:22:55 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:22:55.207664 | orchestrator | 2025-09-29 06:22:55 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:55.208268 | orchestrator | 2025-09-29 06:22:55 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:55.208996 | orchestrator | 2025-09-29 06:22:55 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:55.209045 | orchestrator | 2025-09-29 06:22:55 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:22:58.233898 | orchestrator | 2025-09-29 06:22:58 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:22:58.234007 | orchestrator | 2025-09-29 06:22:58 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:22:58.234228 | orchestrator | 2025-09-29 06:22:58 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:22:58.235033 | orchestrator | 
2025-09-29 06:22:58 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:22:58.235069 | orchestrator | 2025-09-29 06:22:58 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:01.260066 | orchestrator | 2025-09-29 06:23:01 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:01.260344 | orchestrator | 2025-09-29 06:23:01 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:23:01.261320 | orchestrator | 2025-09-29 06:23:01 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:23:01.262147 | orchestrator | 2025-09-29 06:23:01 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:01.262188 | orchestrator | 2025-09-29 06:23:01 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:04.295409 | orchestrator | 2025-09-29 06:23:04 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:04.295593 | orchestrator | 2025-09-29 06:23:04 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:23:04.295976 | orchestrator | 2025-09-29 06:23:04 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:23:04.296785 | orchestrator | 2025-09-29 06:23:04 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:04.296818 | orchestrator | 2025-09-29 06:23:04 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:07.328818 | orchestrator | 2025-09-29 06:23:07 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:07.328939 | orchestrator | 2025-09-29 06:23:07 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:23:07.329860 | orchestrator | 2025-09-29 06:23:07 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:23:07.333039 | orchestrator | 2025-09-29 06:23:07 | INFO  | 
Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:07.333094 | orchestrator | 2025-09-29 06:23:07 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:10.371726 | orchestrator | 2025-09-29 06:23:10 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:10.375064 | orchestrator | 2025-09-29 06:23:10 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:23:10.375769 | orchestrator | 2025-09-29 06:23:10 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:23:10.376159 | orchestrator | 2025-09-29 06:23:10 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:10.376192 | orchestrator | 2025-09-29 06:23:10 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:13.422514 | orchestrator | 2025-09-29 06:23:13 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:13.424343 | orchestrator | 2025-09-29 06:23:13 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:23:13.429945 | orchestrator | 2025-09-29 06:23:13 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:23:13.431759 | orchestrator | 2025-09-29 06:23:13 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:13.431821 | orchestrator | 2025-09-29 06:23:13 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:16.472975 | orchestrator | 2025-09-29 06:23:16 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:16.474809 | orchestrator | 2025-09-29 06:23:16 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state STARTED 2025-09-29 06:23:16.476371 | orchestrator | 2025-09-29 06:23:16 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED 2025-09-29 06:23:16.477958 | orchestrator | 2025-09-29 06:23:16 | INFO  | Task 
6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:16.477996 | orchestrator | 2025-09-29 06:23:16 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:19.516579 | orchestrator | 2025-09-29 06:23:19 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:19.517769 | orchestrator | 2025-09-29 06:23:19 | INFO  | Task afa7f5cc-7aff-4ca6-8615-a73e497d8d02 is in state SUCCESS 2025-09-29 06:23:19.520258 | orchestrator | 2025-09-29 06:23:19.520300 | orchestrator | 2025-09-29 06:23:19.520370 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:23:19.520385 | orchestrator | 2025-09-29 06:23:19.520678 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:23:19.520690 | orchestrator | Monday 29 September 2025 06:20:35 +0000 (0:00:00.302) 0:00:00.302 ****** 2025-09-29 06:23:19.520701 | orchestrator | ok: [testbed-manager] 2025-09-29 06:23:19.520712 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:23:19.520723 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:23:19.520735 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:23:19.520746 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:23:19.520756 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:23:19.520767 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:23:19.520778 | orchestrator | 2025-09-29 06:23:19.520788 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:23:19.520799 | orchestrator | Monday 29 September 2025 06:20:36 +0000 (0:00:00.732) 0:00:01.034 ****** 2025-09-29 06:23:19.520810 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-29 06:23:19.520925 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-29 06:23:19.520940 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-29 
06:23:19.521056 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-29 06:23:19.521749 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-29 06:23:19.521768 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-29 06:23:19.522349 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-29 06:23:19.522372 | orchestrator | 2025-09-29 06:23:19.522391 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-29 06:23:19.522410 | orchestrator | 2025-09-29 06:23:19.522470 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-29 06:23:19.522491 | orchestrator | Monday 29 September 2025 06:20:37 +0000 (0:00:00.941) 0:00:01.976 ****** 2025-09-29 06:23:19.522510 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:23:19.522530 | orchestrator | 2025-09-29 06:23:19.522578 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-29 06:23:19.522597 | orchestrator | Monday 29 September 2025 06:20:38 +0000 (0:00:01.365) 0:00:03.341 ****** 2025-09-29 06:23:19.522617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.522647 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-29 06:23:19.522659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.522671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522703 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.522716 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.522728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522747 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.522759 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.522787 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.522798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.522819 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-29 06:23:19.522835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.522865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522881 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.522904 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.522922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.522934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.522951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.522979 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.522990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.523029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.523059 | orchestrator |
2025-09-29 06:23:19.523071 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-29 06:23:19.523082 | orchestrator | Monday 29 September 2025 06:20:42 +0000 (0:00:03.506) 0:00:06.848 ******
2025-09-29 06:23:19.523093 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:23:19.523105 | orchestrator |
2025-09-29 06:23:19.523115 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-09-29 06:23:19.523126 | orchestrator | Monday 29 September 2025 06:20:43 +0000 (0:00:01.497) 0:00:08.345 ******
2025-09-29 06:23:19.523138 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-29 06:23:19.523154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.523165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.523176 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.523194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.523206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.523223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.523235 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.523246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
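Each `(item=...)` printed by the loop above is one kolla-ansible service definition: a key (the service name) and a value dict with the container name, host group, image, bind-mounted volumes, and an `enabled` flag. A minimal sketch of that mapping, assuming the dict shape shown in the log output (the `enabled_services` helper and the disabled libvirt-exporter example are illustrative, not part of kolla-ansible's API):

```python
# Sketch of the service-definition mapping the per-item loop iterates over.
# Field names and values mirror the item dicts printed in the log above;
# the helper below is a hypothetical illustration of the enabled-flag filter.
services = {
    "prometheus-node-exporter": {
        "container_name": "prometheus_node_exporter",
        "group": "prometheus-node-exporter",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
        "pid_mode": "host",
        "volumes": [
            "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            "/etc/timezone:/etc/timezone:ro",
            "kolla_logs:/var/log/kolla/",
            "/:/host:ro,rslave",
        ],
        "dimensions": {},
    },
    "prometheus-libvirt-exporter": {
        "container_name": "prometheus_libvirt_exporter",
        "group": "prometheus-libvirt-exporter",
        "enabled": False,  # hypothetical: e.g. disabled on a non-compute host
        "image": "registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2",
        "volumes": ["/run/libvirt:/run/libvirt:ro"],
        "dimensions": {},
    },
}

def enabled_services(services: dict) -> list[str]:
    """Return the service keys a loop like the one above would act on."""
    return [name for name, svc in services.items() if svc.get("enabled")]

print(enabled_services(services))  # prints ['prometheus-node-exporter']
```

This is why a given node only shows `changed:` lines for the exporters scheduled on it: services whose `enabled` flag (or group membership) does not match a host are simply not iterated there.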
2025-09-29 06:23:19.523262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.523274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.523285 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523332 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.523355 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.523371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.523382 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523412 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523513 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-29 06:23:19.523539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.523597 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.523608 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.523635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.523647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.523658 | orchestrator |
2025-09-29 06:23:19.523669 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] ***
2025-09-29 06:23:19.523680 | orchestrator | Monday 29 September 2025 06:20:49 +0000 (0:00:05.784) 0:00:14.130 ******
2025-09-29 06:23:19.523691 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-29 06:23:19.523703 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.523718 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.523730 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-29 06:23:19.523754 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.523766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.523777 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.523789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.523800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.523815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.523827 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:23:19.523839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.523856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.523872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.523884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.523895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.523906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.523918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.523939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.523957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-29 06:23:19.523985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.524003 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.524020 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.524037 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:19.524064 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524078 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524098 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:23:19.524108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524144 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:23:19.524154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524298 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:23:19.524308 | orchestrator | 2025-09-29 06:23:19.524318 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-29 06:23:19.524328 | orchestrator | Monday 29 September 2025 06:20:51 +0000 (0:00:01.559) 0:00:15.689 ****** 2025-09-29 06:23:19.524338 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-29 06:23:19.524348 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524363 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-29 06:23:19.524392 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524517 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:23:19.524528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524634 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:23:19.524644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-29 06:23:19.524654 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:23:19.524663 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:19.524678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524708 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:23:19.524718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524757 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:23:19.524767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-29 06:23:19.524777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-29 06:23:19.524803 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:23:19.524813 | orchestrator | 2025-09-29 06:23:19.524823 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-29 06:23:19.524833 | orchestrator | Monday 29 September 2025 06:20:52 +0000 (0:00:01.823) 0:00:17.513 ****** 2025-09-29 06:23:19.524843 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-29 06:23:19.524853 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.524869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.524883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.524893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.524903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.525084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.525109 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.525126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525198 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 
06:23:19.525217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525345 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-29 06:23:19.525366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525376 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525458 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525486 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.525510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.525541 | orchestrator | 2025-09-29 06:23:19.525550 | orchestrator | TASK [prometheus : Find custom 
prometheus alert rules files] ******************* 2025-09-29 06:23:19.525560 | orchestrator | Monday 29 September 2025 06:20:58 +0000 (0:00:05.771) 0:00:23.284 ****** 2025-09-29 06:23:19.525570 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-29 06:23:19.525580 | orchestrator | 2025-09-29 06:23:19.525590 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-29 06:23:19.525627 | orchestrator | Monday 29 September 2025 06:20:59 +0000 (0:00:01.033) 0:00:24.318 ****** 2025-09-29 06:23:19.525639 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096488, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525657 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096488, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525671 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096523, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1789577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525682 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096488, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525697 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096523, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1789577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525709 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096356, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 
1759104134.0, 'ctime': 1759124399.173344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525746 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096488, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.525758 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096488, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525774 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096488, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525784 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096523, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1789577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525794 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096356, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525808 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1096488, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525818 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096356, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525853 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096508, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1775215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525870 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096523, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1789577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525880 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096523, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1789577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525890 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096523, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1789577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525900 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096508, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1775215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525914 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096351, 'dev': 105, 'nlink': 1, 'atime': 
1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525925 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096508, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1775215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525959 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1096523, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1789577, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.525976 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096351, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525986 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096351, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.525996 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096356, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526006 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096356, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526068 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096356, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526083 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096492, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1747346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526093 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096492, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1747346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526138 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096492, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1747346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526150 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096506, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1766438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526160 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096508, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1775215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526170 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096493, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 
'ctime': 1759124399.1751559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526184 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096506, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1766438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526194 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096351, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526210 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096508, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1775215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526245 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096508, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1775215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526257 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096351, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526267 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096492, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1747346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526277 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096506, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1766438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526291 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096351, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526301 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1096356, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.526320 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096486, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526356 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096493, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1751559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526368 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096493, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1751559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.526378 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096492, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1747346, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526388 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096486, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526402 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096506, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1766438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526412 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096492, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1747346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526480 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096506, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1766438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526533 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096493, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1751559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526551 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096506, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1766438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526565 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096493, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1751559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526580 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096486, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526598 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096522, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526612 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1096493, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1751559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526633 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096486, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526684 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096486, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526700 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096522, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526712 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096522, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526720 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1096508, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1775215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526733 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096522, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526748 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096486, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526756 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096522, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526788 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096347, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1432319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526797 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096347, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1432319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526805 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096347, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1432319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526814 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096347, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1432319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526825 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096522, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526839 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096543, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1812503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526847 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096543, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1812503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526876 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096347, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1432319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526886 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096520, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526894 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096543, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1812503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526902 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096520, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526914 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096543, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1812503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526928 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096347, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1432319, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526936 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096354, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.144784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526966 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096354, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.144784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526975 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096543, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1812503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526984 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096349, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.526992 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096520, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527008 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096543, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1812503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527016 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096520, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527025 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096349, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527038 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096503, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1763258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527047 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096354, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.144784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527055 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096497, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1759725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527063 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096520, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527078 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096520, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527087 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096349, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527095 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096538, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1809304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527103 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.527117 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096354, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.144784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096503, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1763258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527133 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096354, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.144784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527142 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096354, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.144784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527160 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096497, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1759725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527169 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096503, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1763258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527177 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096349, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527191 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096349, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527199 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096497, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1759725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527208 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096538, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1809304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527220 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.527228 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096349, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527239 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096503, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1763258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527247 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096503, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1763258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527256 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096503, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1763258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527268 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096538, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1809304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527276 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.527284 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096497, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1759725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527292 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1096351, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096538, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1809304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527316 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False,
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096497, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1759725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.527325 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:19.527333 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096497, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1759725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.527341 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096538, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1809304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.527349 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:23:19.527361 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096538, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1809304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-29 06:23:19.527369 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:23:19.527378 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1096492, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1747346, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527390 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1096506, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1766438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527398 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
5593, 'inode': 1096493, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1751559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527410 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1096486, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.173926, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527418 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096522, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527426 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096347, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1432319, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527458 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1096543, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1812503, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527467 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1096520, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.178498, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527480 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1096354, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.144784, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2025-09-29 06:23:19.527495 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1096349, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1440637, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527513 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1096503, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1763258, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527527 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1096497, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1759725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-29 06:23:19.527541 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1096538, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1809304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-29 06:23:19.527553 | orchestrator |
2025-09-29 06:23:19.527566 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-09-29 06:23:19.527579 | orchestrator | Monday 29 September 2025 06:21:27 +0000 (0:00:27.975) 0:00:52.293 ******
2025-09-29 06:23:19.527593 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 06:23:19.527606 | orchestrator |
2025-09-29 06:23:19.527625 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-09-29 06:23:19.527640 | orchestrator | Monday 29 September 2025 06:21:28 +0000 (0:00:01.240) 0:00:53.534 ******
2025-09-29 06:23:19.527654 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-09-29 06:23:19.527717 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-29 06:23:19.527724 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-09-29 06:23:19.527763 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-09-29 06:23:19.527801 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-09-29 06:23:19.527840 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-09-29 06:23:19.527878 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-09-29 06:23:19.527916 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-09-29 06:23:19.527954 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-29 06:23:19.527966 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-29 06:23:19.527974 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-29 06:23:19.527982 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-29 06:23:19.527989 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-29 06:23:19.527997 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-29 06:23:19.528004 | orchestrator |
2025-09-29 06:23:19.528012 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-09-29 06:23:19.528020 | orchestrator | Monday 29 September 2025 06:21:32 +0000 (0:00:03.877) 0:00:57.411 ******
2025-09-29 06:23:19.528028 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-29 06:23:19.528036 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-29 06:23:19.528048 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.528056 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.528063 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-09-29 06:23:19.528071 | orchestrator |
skipping: [testbed-node-2] 2025-09-29 06:23:19.528078 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-29 06:23:19.528086 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:23:19.528094 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-29 06:23:19.528102 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:23:19.528109 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-29 06:23:19.528117 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:23:19.528158 | orchestrator | fatal: [testbed-manager]: FAILED! => {"msg": "{{ prometheus_blackbox_exporter_endpoints_default | selectattr('enabled', 'true') | map(attribute='endpoints') | flatten | union(prometheus_blackbox_exporter_endpoints_custom) | unique | select | list }}: [{'endpoints': ['aodh:os_endpoint:{{ aodh_public_endpoint }}', \"{{ ('aodh_internal:os_endpoint:' + aodh_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_aodh | bool }}'}, {'endpoints': ['barbican:os_endpoint:{{ barbican_public_endpoint }}', \"{{ ('barbican_internal:os_endpoint:' + barbican_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_barbican | bool }}'}, {'endpoints': ['blazar:os_endpoint:{{ blazar_public_base_endpoint }}', \"{{ ('blazar_internal:os_endpoint:' + blazar_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_blazar | bool }}'}, {'endpoints': ['ceph_rgw:http_2xx:{{ ceph_rgw_public_base_endpoint }}', \"{{ ('ceph_rgw_internal:http_2xx:' + ceph_rgw_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_ceph_rgw | bool }}'}, {'endpoints': ['cinder:os_endpoint:{{ cinder_public_base_endpoint }}', \"{{ ('cinder_internal:os_endpoint:' + 
cinder_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_cinder | bool }}'}, {'endpoints': ['cloudkitty:os_endpoint:{{ cloudkitty_public_endpoint }}', \"{{ ('cloudkitty_internal:os_endpoint:' + cloudkitty_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_cloudkitty | bool }}'}, {'endpoints': ['designate:os_endpoint:{{ designate_public_endpoint }}', \"{{ ('designate_internal:os_endpoint:' + designate_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_designate | bool }}'}, {'endpoints': ['glance:os_endpoint:{{ glance_public_endpoint }}', \"{{ ('glance_internal:os_endpoint:' + glance_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_glance | bool }}'}, {'endpoints': ['gnocchi:os_endpoint:{{ gnocchi_public_endpoint }}', \"{{ ('gnocchi_internal:os_endpoint:' + gnocchi_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_gnocchi | bool }}'}, {'endpoints': ['heat:os_endpoint:{{ heat_public_base_endpoint }}', \"{{ ('heat_internal:os_endpoint:' + heat_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'heat_cfn:os_endpoint:{{ heat_cfn_public_base_endpoint }}', \"{{ ('heat_cfn_internal:os_endpoint:' + heat_cfn_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_heat | bool }}'}, {'endpoints': ['horizon:http_2xx:{{ horizon_public_endpoint }}', \"{{ ('horizon_internal:http_2xx:' + horizon_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_horizon | bool }}'}, {'endpoints': ['ironic:os_endpoint:{{ ironic_public_endpoint }}', \"{{ ('ironic_internal:os_endpoint:' + ironic_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'ironic_inspector:os_endpoint:{{ ironic_inspector_public_endpoint }}', \"{{ 
('ironic_inspector_internal:os_endpoint:' + ironic_inspector_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_ironic | bool }}'}, {'endpoints': ['keystone:os_endpoint:{{ keystone_public_url }}', \"{{ ('keystone_internal:os_endpoint:' + keystone_internal_url) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_keystone | bool }}'}, {'endpoints': ['magnum:os_endpoint:{{ magnum_public_base_endpoint }}', \"{{ ('magnum_internal:os_endpoint:' + magnum_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_magnum | bool }}'}, {'endpoints': ['manila:os_endpoint:{{ manila_public_base_endpoint }}', \"{{ ('manila_internal:os_endpoint:' + manila_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_manila | bool }}'}, {'endpoints': ['masakari:os_endpoint:{{ masakari_public_endpoint }}', \"{{ ('masakari_internal:os_endpoint:' + masakari_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_masakari | bool }}'}, {'endpoints': ['mistral:os_endpoint:{{ mistral_public_base_endpoint }}', \"{{ ('mistral_internal:os_endpoint:' + mistral_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_mistral | bool }}'}, {'endpoints': ['neutron:os_endpoint:{{ neutron_public_endpoint }}', \"{{ ('neutron_internal:os_endpoint:' + neutron_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_neutron | bool }}'}, {'endpoints': ['nova:os_endpoint:{{ nova_public_base_endpoint }}', \"{{ ('nova_internal:os_endpoint:' + nova_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_nova | bool }}'}, {'endpoints': ['octavia:os_endpoint:{{ octavia_public_endpoint }}', \"{{ ('octavia_internal:os_endpoint:' + octavia_internal_endpoint) if not kolla_same_external_internal_vip | bool 
}}\"], 'enabled': '{{ enable_octavia | bool }}'}, {'endpoints': ['placement:os_endpoint:{{ placement_public_endpoint }}', \"{{ ('placement_internal:os_endpoint:' + placement_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_placement | bool }}'}, {'endpoints': ['skyline_apiserver:os_endpoint:{{ skyline_apiserver_public_endpoint }}', \"{{ ('skyline_apiserver_internal:os_endpoint:' + skyline_apiserver_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\", 'skyline_console:os_endpoint:{{ skyline_console_public_endpoint }}', \"{{ ('skyline_console_internal:os_endpoint:' + skyline_console_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_skyline | bool }}'}, {'endpoints': ['swift:os_endpoint:{{ swift_public_base_endpoint }}', \"{{ ('swift_internal:os_endpoint:' + swift_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_swift | bool }}'}, {'endpoints': ['tacker:os_endpoint:{{ tacker_public_endpoint }}', \"{{ ('tacker_internal:os_endpoint:' + tacker_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_tacker | bool }}'}, {'endpoints': ['trove:os_endpoint:{{ trove_public_base_endpoint }}', \"{{ ('trove_internal:os_endpoint:' + trove_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_trove | bool }}'}, {'endpoints': ['venus:os_endpoint:{{ venus_public_endpoint }}', \"{{ ('venus_internal:os_endpoint:' + venus_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_venus | bool }}'}, {'endpoints': ['watcher:os_endpoint:{{ watcher_public_endpoint }}', \"{{ ('watcher_internal:os_endpoint:' + watcher_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_watcher | bool }}'}, {'endpoints': ['zun:os_endpoint:{{ zun_public_base_endpoint }}', \"{{ 
('zun_internal:os_endpoint:' + zun_internal_base_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_zun | bool }}'}, {'endpoints': \"{% set etcd_endpoints = [] %}{% for host in groups.get('etcd', []) %}{{ etcd_endpoints.append('etcd_' + host + ':http_2xx:' + hostvars[host]['etcd_protocol'] + '://' + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['etcd_client_port'] + '/metrics')}}{% endfor %}{{ etcd_endpoints }}\", 'enabled': '{{ enable_etcd | bool }}'}, {'endpoints': ['grafana:http_2xx:{{ grafana_public_endpoint }}', \"{{ ('grafana_internal:http_2xx:' + grafana_internal_endpoint) if not kolla_same_external_internal_vip | bool }}\"], 'enabled': '{{ enable_grafana | bool }}'}, {'endpoints': ['opensearch:http_2xx:{{ opensearch_internal_endpoint }}'], 'enabled': '{{ enable_opensearch | bool }}'}, {'endpoints': ['opensearch_dashboards:http_2xx_opensearch_dashboards:{{ opensearch_dashboards_internal_endpoint }}/api/status'], 'enabled': '{{ enable_opensearch_dashboards | bool }}'}, {'endpoints': ['opensearch_dashboards_external:http_2xx_opensearch_dashboards:{{ opensearch_dashboards_external_endpoint }}/api/status'], 'enabled': '{{ enable_opensearch_dashboards_external | bool }}'}, {'endpoints': ['prometheus:http_2xx_prometheus:{{ prometheus_public_endpoint if enable_prometheus_server_external else prometheus_internal_endpoint }}/-/healthy'], 'enabled': '{{ enable_prometheus | bool }}'}, {'endpoints': ['prometheus_alertmanager:http_2xx_alertmanager:{{ prometheus_alertmanager_public_endpoint if enable_prometheus_alertmanager_external else prometheus_alertmanager_internal_endpoint }}'], 'enabled': '{{ enable_prometheus_alertmanager | bool }}'}, {'endpoints': \"{% set rabbitmq_endpoints = [] %}{% for host in groups.get('rabbitmq', []) %}{{ rabbitmq_endpoints.append('rabbitmq_' + host + (':tls_connect:' if rabbitmq_enable_tls | bool else ':tcp_connect:') + ('api' | kolla_address(host) | 
put_address_in_context('url')) + ':' + hostvars[host]['rabbitmq_port'] ) }}{% endfor %}{{ rabbitmq_endpoints }}\", 'enabled': '{{ enable_rabbitmq | bool }}'}, {'endpoints': \"{% set redis_endpoints = [] %}{% for host in groups.get('redis', []) %}{{ redis_endpoints.append('redis_' + host + ':tcp_connect:' + ('api' | kolla_address(host) | put_address_in_context('url')) + ':' + hostvars[host]['redis_port']) }}{% endfor %}{{ redis_endpoints }}\", 'enabled': '{{ enable_redis | bool }}'}]: 'swift_public_base_endpoint' is undefined"}
2025-09-29 06:23:19.528183 | orchestrator |
2025-09-29 06:23:19.528191 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-09-29 06:23:19.528199 | orchestrator | Monday 29 September 2025 06:21:51 +0000 (0:00:18.602) 0:01:16.014 ******
2025-09-29 06:23:19.528206 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-29 06:23:19.528214 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.528222 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-29 06:23:19.528230 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.528237 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-29 06:23:19.528245 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:19.528253 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-29 06:23:19.528261 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:23:19.528268 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-29 06:23:19.528276 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.528284 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-09-29 06:23:19.528292 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:23:19.528299 | orchestrator |
2025-09-29 06:23:19.528307 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-09-29 06:23:19.528319 | orchestrator | Monday 29 September 2025 06:21:54 +0000 (0:00:03.377) 0:01:19.391 ******
2025-09-29 06:23:19.528331 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-29 06:23:19.528339 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.528347 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-29 06:23:19.528355 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.528363 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-29 06:23:19.528371 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:19.528378 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-29 06:23:19.528386 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.528394 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-29 06:23:19.528402 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:23:19.528410 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-09-29 06:23:19.528417 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:23:19.528425 | orchestrator |
2025-09-29 06:23:19.528456 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-09-29 06:23:19.528470 | orchestrator | Monday 29 September 2025 06:21:56 +0000 (0:00:02.023) 0:01:21.414 ******
2025-09-29 06:23:19.528483 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-29 06:23:19.528496 | orchestrator |
2025-09-29 06:23:19.528507 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-09-29 06:23:19.528521 | orchestrator | Monday 29 September 2025 06:21:58 +0000 (0:00:01.991) 0:01:23.406 ******
2025-09-29 06:23:19.528533 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.528548 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.528559 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:19.528566 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:23:19.528574 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.528588 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:23:19.528601 | orchestrator |
2025-09-29 06:23:19.528620 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-09-29 06:23:19.528634 | orchestrator | Monday 29 September 2025 06:21:59 +0000 (0:00:00.618) 0:01:24.024 ******
2025-09-29 06:23:19.528648 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:23:19.528662 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.528671 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:23:19.528679 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:23:19.528686 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:23:19.528694 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:23:19.528702 | orchestrator |
2025-09-29 06:23:19.528710 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-09-29 06:23:19.528717 | orchestrator | Monday 29 September 2025 06:22:01 +0000 (0:00:02.606) 0:01:26.631 ******
2025-09-29 06:23:19.528725 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-29 06:23:19.528732 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.528740 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-29 06:23:19.528748 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.528755 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-29 06:23:19.528763 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:23:19.528770 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-29 06:23:19.528784 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:19.528792 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-29 06:23:19.528800 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.528807 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-09-29 06:23:19.528815 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:23:19.528823 | orchestrator |
2025-09-29 06:23:19.528830 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-09-29 06:23:19.528838 | orchestrator | Monday 29 September 2025 06:22:03 +0000 (0:00:01.964) 0:01:28.596 ******
2025-09-29 06:23:19.528846 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-29 06:23:19.528854 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.528861 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-29 06:23:19.528869 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-29 06:23:19.528877 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.528885 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:19.528892 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-29 06:23:19.528900 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:23:19.528907 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-29 06:23:19.528915 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.528923 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-09-29 06:23:19.528931 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:23:19.528938 | orchestrator |
2025-09-29 06:23:19.528951 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-09-29 06:23:19.528959 | orchestrator | Monday 29 September 2025 06:22:06 +0000 (0:00:00.605) 0:01:31.149 ******
2025-09-29 06:23:19.528966 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.528974 | orchestrator |
2025-09-29 06:23:19.528981 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-09-29 06:23:19.528989 | orchestrator | Monday 29 September 2025 06:22:07 +0000 (0:00:00.605) 0:01:31.755 ******
2025-09-29 06:23:19.528997 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:19.529004 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:19.529012 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:19.529019 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:23:19.529027 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:23:19.529034 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:23:19.529042 |
orchestrator | 2025-09-29 06:23:19.529050 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-29 06:23:19.529057 | orchestrator | Monday 29 September 2025 06:22:07 +0000 (0:00:00.472) 0:01:32.227 ****** 2025-09-29 06:23:19.529065 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:23:19.529072 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:23:19.529080 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:19.529087 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:23:19.529095 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:23:19.529103 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:23:19.529110 | orchestrator | 2025-09-29 06:23:19.529118 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-29 06:23:19.529126 | orchestrator | Monday 29 September 2025 06:22:08 +0000 (0:00:00.857) 0:01:33.084 ****** 2025-09-29 06:23:19.529139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.529153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.529162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.529170 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.529178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.529190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-29 06:23:19.529199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529300 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529338 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-29 06:23:19.529350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-29 06:23:19.529371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-29 06:23:19.529379 | orchestrator |
2025-09-29 06:23:19.529387 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-29 06:23:19.529395 | orchestrator | Monday 29 September 2025 06:22:12 +0000 (0:00:04.288) 0:01:37.373 ******
2025-09-29 06:23:19.529403 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2025-09-29 06:23:19.529411 | orchestrator |
2025-09-29 06:23:19.529418 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-29 06:23:19.529426 | orchestrator | Monday 29 September 2025 06:22:16 +0000 (0:00:04.025) 0:01:41.399 ******
2025-09-29 06:23:19.529474 | orchestrator |
2025-09-29 06:23:19.529487 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-29 06:23:19.529495 | orchestrator | Monday 29 September 2025 06:22:16 +0000 (0:00:00.126) 0:01:41.526 ******
2025-09-29 06:23:19.529503 | orchestrator |
2025-09-29 06:23:19.529510 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-29 06:23:19.529518 | orchestrator | Monday 29 September 2025 06:22:17 +0000 (0:00:00.159) 0:01:41.685 ******
2025-09-29 06:23:19.529526 | orchestrator |
2025-09-29 06:23:19.529534 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-29 06:23:19.529542 | orchestrator | Monday 29 September 2025 06:22:17 +0000 (0:00:00.078) 0:01:41.764 ******
2025-09-29 06:23:19.529549 | orchestrator |
2025-09-29 06:23:19.529557 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-29 06:23:19.529565 | orchestrator | Monday 29 September 2025 06:22:17 +0000 (0:00:00.061) 0:01:41.825 ******
2025-09-29 06:23:19.529573 | orchestrator |
2025-09-29 06:23:19.529579 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-29 06:23:19.529586 | orchestrator | Monday 29 September 2025 06:22:17 +0000 (0:00:00.279) 0:01:42.105 ******
2025-09-29 06:23:19.529593 | orchestrator |
2025-09-29 06:23:19.529599 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-29 06:23:19.529606 | orchestrator | Monday 29 September 2025 06:22:17 +0000 (0:00:00.083) 0:01:42.188 ******
2025-09-29 06:23:19.529613 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:23:19.529619 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:23:19.529626 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:23:19.529632 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:23:19.529639 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:23:19.529645 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:23:19.529652 | orchestrator |
2025-09-29 06:23:19.529658 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-29 06:23:19.529665 | orchestrator | Monday 29 September 2025 06:22:25 +0000 (0:00:08.340) 0:01:50.529 ******
2025-09-29 06:23:19.529671 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:23:19.529678 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:23:19.529685 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:23:19.529691 | orchestrator |
2025-09-29 06:23:19.529698 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-29 06:23:19.529704 | orchestrator | Monday 29 September 2025 06:22:37 +0000 (0:00:11.587) 0:02:02.117 ******
2025-09-29 06:23:19.529711 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:23:19.529717 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:23:19.529724 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:23:19.529730 | orchestrator |
2025-09-29 06:23:19.529737 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-29 06:23:19.529748 | orchestrator | Monday 29 September 2025 06:22:45 +0000 (0:00:07.719) 0:02:09.836 ******
2025-09-29 06:23:19.529755 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:23:19.529762 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:23:19.529768 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:23:19.529775 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:23:19.529781 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:23:19.529788 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:23:19.529794 | orchestrator |
2025-09-29 06:23:19.529801 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-29 06:23:19.529807 | orchestrator | Monday 29 September 2025 06:23:00 +0000 (0:00:15.302) 0:02:25.138 ******
2025-09-29 06:23:19.529814 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:23:19.529821 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:23:19.529827 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:23:19.529834 | orchestrator |
2025-09-29 06:23:19.529840 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-29 06:23:19.529850 | orchestrator | Monday 29 September 2025 06:23:07 +0000 (0:00:06.641) 0:02:31.780 ******
2025-09-29 06:23:19.529857 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:23:19.529864 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:23:19.529870 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:23:19.529877 | orchestrator |
2025-09-29 06:23:19.529883 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:23:19.529890 | orchestrator | testbed-manager : ok=11  changed=4  unreachable=0 failed=1  skipped=2  rescued=0 ignored=0
2025-09-29 06:23:19.529897 | orchestrator | testbed-node-0 : ok=17  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-29 06:23:19.529904 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-29 06:23:19.529911 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-29 06:23:19.529918 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-29 06:23:19.529924 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-29 06:23:19.529931 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-29 06:23:19.529937 | orchestrator |
2025-09-29 06:23:19.529944 | orchestrator |
2025-09-29 06:23:19.529950 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:23:19.529957 | orchestrator | Monday 29 September 2025 06:23:18 +0000 (0:00:10.958) 0:02:42.739 ******
2025-09-29 06:23:19.529967 | orchestrator | ===============================================================================
2025-09-29 06:23:19.529974 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.98s
2025-09-29 06:23:19.529980 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 18.60s
2025-09-29 06:23:19.529987 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.30s
2025-09-29 06:23:19.529994 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.59s
2025-09-29 06:23:19.530000 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.96s
2025-09-29 06:23:19.530007 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 8.34s
2025-09-29 06:23:19.530013 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.72s
2025-09-29 06:23:19.530046 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.64s
2025-09-29 06:23:19.530059 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.78s
2025-09-29 06:23:19.530066 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.77s
2025-09-29 06:23:19.530073 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.29s
2025-09-29 06:23:19.530079 | orchestrator | prometheus : Creating prometheus database user and setting permissions --- 4.03s
2025-09-29 06:23:19.530086 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.88s
2025-09-29 06:23:19.530092 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.51s
2025-09-29 06:23:19.530099 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.38s
2025-09-29 06:23:19.530105 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.61s
2025-09-29 06:23:19.530112 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.55s
2025-09-29 06:23:19.530119 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.02s
2025-09-29 06:23:19.530125 | orchestrator | prometheus : Find custom Alertmanager alert notification templates ------ 1.99s
2025-09-29 06:23:19.530132 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 1.96s
2025-09-29 06:23:19.530138 | orchestrator | 2025-09-29 06:23:19 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:19.530145 | orchestrator | 2025-09-29 06:23:19 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:19.530152 | orchestrator | 2025-09-29 06:23:19 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:19.530158 | orchestrator | 2025-09-29 06:23:19 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:22.571541 | orchestrator | 2025-09-29 06:23:22 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:22.575705 | orchestrator | 2025-09-29 06:23:22 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:22.578578 | orchestrator | 2025-09-29 06:23:22 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:22.582755 | orchestrator | 2025-09-29 06:23:22 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:22.582910 | orchestrator | 2025-09-29 06:23:22 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:25.627699 | orchestrator | 2025-09-29 06:23:25 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:25.628253 | orchestrator | 2025-09-29 06:23:25 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:25.631257 | orchestrator | 2025-09-29 06:23:25 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:25.632945 | orchestrator | 2025-09-29 06:23:25 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:25.632985 | orchestrator | 2025-09-29 06:23:25 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:28.680426 | orchestrator | 2025-09-29 06:23:28 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:28.682704 | orchestrator | 2025-09-29 06:23:28 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:28.685138 | orchestrator | 2025-09-29 06:23:28 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:28.686877 | orchestrator | 2025-09-29 06:23:28 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:28.687138 | orchestrator | 2025-09-29 06:23:28 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:31.723822 | orchestrator | 2025-09-29 06:23:31 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:31.724430 | orchestrator | 2025-09-29 06:23:31 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:31.725222 | orchestrator | 2025-09-29 06:23:31 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:31.725692 | orchestrator | 2025-09-29 06:23:31 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:31.725717 | orchestrator | 2025-09-29 06:23:31 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:34.776106 | orchestrator | 2025-09-29 06:23:34 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:34.777761 | orchestrator | 2025-09-29 06:23:34 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:34.779730 | orchestrator | 2025-09-29 06:23:34 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:34.782163 | orchestrator | 2025-09-29 06:23:34 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:34.782227 | orchestrator | 2025-09-29 06:23:34 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:37.830711 | orchestrator | 2025-09-29 06:23:37 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:37.832602 | orchestrator | 2025-09-29 06:23:37 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:37.834373 | orchestrator | 2025-09-29 06:23:37 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:37.836121 | orchestrator | 2025-09-29 06:23:37 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:37.836161 | orchestrator | 2025-09-29 06:23:37 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:40.883132 | orchestrator | 2025-09-29 06:23:40 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:40.886159 | orchestrator | 2025-09-29 06:23:40 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:40.887973 | orchestrator | 2025-09-29 06:23:40 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:40.890549 | orchestrator | 2025-09-29 06:23:40 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:40.890592 | orchestrator | 2025-09-29 06:23:40 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:43.939850 | orchestrator | 2025-09-29 06:23:43 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:43.941746 | orchestrator | 2025-09-29 06:23:43 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:43.944021 | orchestrator | 2025-09-29 06:23:43 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:43.946395 | orchestrator | 2025-09-29 06:23:43 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:43.946485 | orchestrator | 2025-09-29 06:23:43 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:46.992929 | orchestrator | 2025-09-29 06:23:46 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:46.994969 | orchestrator | 2025-09-29 06:23:46 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state STARTED
2025-09-29 06:23:46.997154 | orchestrator | 2025-09-29 06:23:46 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED
2025-09-29 06:23:46.998383 | orchestrator | 2025-09-29 06:23:46 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED
2025-09-29 06:23:46.998419 | orchestrator | 2025-09-29 06:23:46 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:23:50.042678 | orchestrator | 2025-09-29 06:23:50 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED
2025-09-29 06:23:50.044193 | orchestrator | 2025-09-29 06:23:50 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED
2025-09-29 06:23:50.047990 | orchestrator | 2025-09-29 06:23:50 | INFO  | Task 7e07d07d-79c7-4ab3-b1e2-bbb7ff013805 is in state SUCCESS
2025-09-29 06:23:50.050238 | orchestrator |
2025-09-29 06:23:50.050293 | orchestrator |
2025-09-29 06:23:50.050304 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:23:50.050314 | orchestrator |
2025-09-29 06:23:50.050320 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:23:50.050327 | orchestrator | Monday 29 September 2025 06:20:42 +0000 (0:00:00.346) 0:00:00.346 ******
2025-09-29 06:23:50.050346 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:23:50.050354 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:23:50.050361 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:23:50.050367 | orchestrator |
2025-09-29 06:23:50.050381 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:23:50.050387 | orchestrator | Monday 29 September 2025 06:20:42 +0000 (0:00:00.262) 0:00:00.609 ******
2025-09-29 06:23:50.050394 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-09-29 06:23:50.050401 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-09-29 06:23:50.050408 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-09-29 06:23:50.050414 | orchestrator |
2025-09-29 06:23:50.050420 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-09-29 06:23:50.050430 | orchestrator |
2025-09-29 06:23:50.050436 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-29 06:23:50.050467 | orchestrator | Monday 29 September 2025 06:20:43 +0000 (0:00:00.424) 0:00:01.034 ******
2025-09-29 06:23:50.050474 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:23:50.050481 | orchestrator |
2025-09-29 06:23:50.050488 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-09-29 06:23:50.050495 | orchestrator | Monday 29 September 2025 06:20:43 +0000 (0:00:00.484) 0:00:01.518 ******
2025-09-29 06:23:50.050501 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-09-29 06:23:50.050505 | orchestrator |
2025-09-29 06:23:50.050510 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-09-29 06:23:50.050514 | orchestrator | Monday 29 September 2025 06:20:47 +0000 (0:00:03.753) 0:00:05.272 ******
2025-09-29 06:23:50.050520 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-09-29 06:23:50.050529 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-09-29 06:23:50.050539 | orchestrator |
2025-09-29 06:23:50.050544 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-09-29 06:23:50.050551 | orchestrator | Monday 29 September 2025 06:20:54 +0000 (0:00:06.995) 0:00:12.267 ******
2025-09-29 06:23:50.050557 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-29 06:23:50.050563 | orchestrator |
2025-09-29 06:23:50.050570 | orchestrator |
TASK [service-ks-register : designate | Creating users] ************************ 2025-09-29 06:23:50.050577 | orchestrator | Monday 29 September 2025 06:20:58 +0000 (0:00:03.511) 0:00:15.778 ****** 2025-09-29 06:23:50.050584 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-29 06:23:50.050589 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-29 06:23:50.050593 | orchestrator | 2025-09-29 06:23:50.050597 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-29 06:23:50.050634 | orchestrator | Monday 29 September 2025 06:21:02 +0000 (0:00:04.316) 0:00:20.095 ****** 2025-09-29 06:23:50.050642 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-29 06:23:50.050646 | orchestrator | 2025-09-29 06:23:50.050650 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-29 06:23:50.050654 | orchestrator | Monday 29 September 2025 06:21:06 +0000 (0:00:04.367) 0:00:24.463 ****** 2025-09-29 06:23:50.050658 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-29 06:23:50.050663 | orchestrator | 2025-09-29 06:23:50.050669 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-29 06:23:50.050676 | orchestrator | Monday 29 September 2025 06:21:11 +0000 (0:00:04.351) 0:00:28.814 ****** 2025-09-29 06:23:50.050684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.050704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.050709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.050714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050772 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050816 | orchestrator | 2025-09-29 06:23:50.050820 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-29 06:23:50.050824 | orchestrator | Monday 29 September 2025 06:21:13 
+0000 (0:00:02.653) 0:00:31.467 ******
2025-09-29 06:23:50.050828 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:50.050832 | orchestrator |
2025-09-29 06:23:50.050836 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-09-29 06:23:50.050840 | orchestrator | Monday 29 September 2025 06:21:13 +0000 (0:00:00.102) 0:00:31.570 ******
2025-09-29 06:23:50.050843 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:23:50.050847 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:23:50.050851 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:23:50.050854 | orchestrator |
2025-09-29 06:23:50.050858 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-29 06:23:50.050862 | orchestrator | Monday 29 September 2025 06:21:14 +0000 (0:00:00.245) 0:00:31.815 ******
2025-09-29 06:23:50.050866 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:23:50.050870 | orchestrator |
2025-09-29 06:23:50.050873 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-09-29 06:23:50.050877 | orchestrator | Monday 29 September 2025 06:21:15 +0000 (0:00:01.212) 0:00:33.027 ******
2025-09-29 06:23:50.050881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode':
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.050956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.050962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-09-29 06:23:50.050971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.050984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 
06:23:50.051014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-29 06:23:50.051058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-29 06:23:50.051061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-29 06:23:50.051065 | orchestrator |
2025-09-29 06:23:50.051069 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-09-29 06:23:50.051073 | orchestrator | Monday 29 September 2025 06:21:22 +0000 (0:00:07.074) 0:00:40.101 ******
2025-09-29 06:23:50.051077 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.051097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-09-29 06:23:50.051117 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:23:50.051121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.051141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051161 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:23:50.051165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.051187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051206 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:50.051210 | orchestrator | 2025-09-29 06:23:50.051214 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-09-29 06:23:50.051218 | orchestrator | Monday 29 September 2025 06:21:23 +0000 (0:00:00.981) 0:00:41.083 ****** 2025-09-29 06:23:50.051222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.051247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051264 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:23:50.051268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.051290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051306 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:23:50.051310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.051325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-09-29 06:23:50.051346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051353 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:50.051357 | orchestrator | 2025-09-29 06:23:50.051361 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-09-29 06:23:50.051365 | orchestrator | Monday 29 September 2025 06:21:24 +0000 (0:00:01.428) 0:00:42.511 ****** 2025-09-29 06:23:50.051369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.051373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.051395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.051399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051560 | orchestrator | 2025-09-29 06:23:50.051566 | orchestrator | TASK [designate : 
Copying over designate.conf] ********************************* 2025-09-29 06:23:50.051573 | orchestrator | Monday 29 September 2025 06:21:31 +0000 (0:00:06.704) 0:00:49.216 ****** 2025-09-29 06:23:50.051580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.051599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.051608 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.051615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051675 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051697 | orchestrator | 2025-09-29 06:23:50.051701 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-29 06:23:50.051705 | orchestrator | Monday 29 September 2025 06:21:54 +0000 (0:00:22.716) 0:01:11.932 ****** 2025-09-29 06:23:50.051708 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-29 06:23:50.051712 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-29 06:23:50.051716 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-29 06:23:50.051720 | orchestrator | 2025-09-29 06:23:50.051723 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-29 06:23:50.051727 | orchestrator | Monday 29 September 2025 06:22:00 +0000 (0:00:05.947) 0:01:17.880 ****** 2025-09-29 06:23:50.051731 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-29 06:23:50.051734 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-29 06:23:50.051738 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-29 06:23:50.051745 | orchestrator | 2025-09-29 06:23:50.051749 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-29 06:23:50.051752 | orchestrator | Monday 29 
September 2025 06:22:04 +0000 (0:00:03.986) 0:01:21.866 ****** 2025-09-29 06:23:50.051756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-09-29 06:23:50.051800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051813 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051858 | orchestrator | 2025-09-29 06:23:50.051862 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-29 06:23:50.051865 | orchestrator | Monday 29 September 2025 06:22:07 +0000 (0:00:03.356) 0:01:25.223 ****** 2025-09-29 06:23:50.051869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.051909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-09-29 06:23:50.051913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051929 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.051982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.051997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052005 | orchestrator | 2025-09-29 06:23:50.052010 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-29 06:23:50.052013 | orchestrator | Monday 29 September 2025 06:22:10 +0000 (0:00:02.961) 0:01:28.185 ****** 2025-09-29 06:23:50.052017 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:23:50.052021 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:23:50.052025 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:50.052029 | orchestrator | 2025-09-29 06:23:50.052032 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-09-29 06:23:50.052036 | orchestrator | Monday 29 September 2025 06:22:10 +0000 (0:00:00.293) 0:01:28.479 ****** 2025-09-29 06:23:50.052040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.052044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.052051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-09-29 06:23:50.052055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052074 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:50.052078 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.052082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.052088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052120 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:23:50.052124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-29 06:23:50.052127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-29 06:23:50.052131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-29 06:23:50.052155 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:23:50.052159 | orchestrator | 2025-09-29 06:23:50.052162 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-29 06:23:50.052166 | orchestrator | Monday 29 September 2025 06:22:12 +0000 (0:00:01.931) 0:01:30.410 ****** 2025-09-29 06:23:50.052170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.052174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.052180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-29 06:23:50.052200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052238 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-29 06:23:50.052278 | orchestrator | 2025-09-29 06:23:50.052282 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-29 06:23:50.052286 | orchestrator | Monday 29 September 2025 06:22:17 +0000 (0:00:04.464) 0:01:34.874 ****** 2025-09-29 06:23:50.052290 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:23:50.052293 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:23:50.052297 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:23:50.052301 | orchestrator | 2025-09-29 06:23:50.052305 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-09-29 06:23:50.052308 | orchestrator | Monday 29 September 2025 06:22:17 +0000 (0:00:00.499) 0:01:35.373 ****** 2025-09-29 06:23:50.052312 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-09-29 06:23:50.052316 | orchestrator | 2025-09-29 06:23:50.052320 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2025-09-29 06:23:50.052324 | orchestrator | Monday 29 September 2025 06:22:19 +0000 (0:00:02.085) 0:01:37.459 ****** 2025-09-29 06:23:50.052327 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-29 06:23:50.052331 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-09-29 06:23:50.052335 | orchestrator | 2025-09-29 06:23:50.052339 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-09-29 06:23:50.052342 | orchestrator | Monday 29 September 2025 06:22:22 +0000 (0:00:02.841) 0:01:40.300 ****** 2025-09-29 06:23:50.052346 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052350 | orchestrator | 2025-09-29 06:23:50.052354 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-29 06:23:50.052357 | orchestrator | Monday 29 September 2025 06:22:39 +0000 (0:00:16.891) 0:01:57.192 ****** 2025-09-29 06:23:50.052361 | orchestrator | 2025-09-29 06:23:50.052365 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-29 06:23:50.052368 | orchestrator | Monday 29 September 2025 06:22:39 +0000 (0:00:00.243) 0:01:57.436 ****** 2025-09-29 06:23:50.052372 | orchestrator | 2025-09-29 06:23:50.052376 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-09-29 06:23:50.052380 | orchestrator | Monday 29 September 2025 06:22:39 +0000 (0:00:00.064) 0:01:57.500 ****** 2025-09-29 06:23:50.052383 | orchestrator | 2025-09-29 06:23:50.052387 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-09-29 06:23:50.052391 | orchestrator | Monday 29 September 2025 06:22:39 +0000 (0:00:00.065) 0:01:57.566 ****** 2025-09-29 06:23:50.052394 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052398 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:23:50.052402 | 
orchestrator | changed: [testbed-node-2] 2025-09-29 06:23:50.052406 | orchestrator | 2025-09-29 06:23:50.052409 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-09-29 06:23:50.052417 | orchestrator | Monday 29 September 2025 06:22:49 +0000 (0:00:09.333) 0:02:06.899 ****** 2025-09-29 06:23:50.052421 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:23:50.052425 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052428 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:23:50.052432 | orchestrator | 2025-09-29 06:23:50.052436 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-09-29 06:23:50.052440 | orchestrator | Monday 29 September 2025 06:23:01 +0000 (0:00:12.311) 0:02:19.211 ****** 2025-09-29 06:23:50.052459 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052463 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:23:50.052466 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:23:50.052470 | orchestrator | 2025-09-29 06:23:50.052474 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-09-29 06:23:50.052477 | orchestrator | Monday 29 September 2025 06:23:07 +0000 (0:00:06.128) 0:02:25.340 ****** 2025-09-29 06:23:50.052481 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:23:50.052485 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052489 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:23:50.052493 | orchestrator | 2025-09-29 06:23:50.052497 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-09-29 06:23:50.052501 | orchestrator | Monday 29 September 2025 06:23:19 +0000 (0:00:11.340) 0:02:36.681 ****** 2025-09-29 06:23:50.052504 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052508 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:23:50.052512 | orchestrator | 
changed: [testbed-node-2] 2025-09-29 06:23:50.052517 | orchestrator | 2025-09-29 06:23:50.052522 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-09-29 06:23:50.052531 | orchestrator | Monday 29 September 2025 06:23:29 +0000 (0:00:10.754) 0:02:47.436 ****** 2025-09-29 06:23:50.052537 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052544 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:23:50.052549 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:23:50.052555 | orchestrator | 2025-09-29 06:23:50.052560 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-29 06:23:50.052566 | orchestrator | Monday 29 September 2025 06:23:40 +0000 (0:00:11.182) 0:02:58.618 ****** 2025-09-29 06:23:50.052572 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:23:50.052578 | orchestrator | 2025-09-29 06:23:50.052584 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:23:50.052591 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-29 06:23:50.052597 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 06:23:50.052604 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 06:23:50.052610 | orchestrator | 2025-09-29 06:23:50.052615 | orchestrator | 2025-09-29 06:23:50.052625 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:23:50.052631 | orchestrator | Monday 29 September 2025 06:23:48 +0000 (0:00:07.655) 0:03:06.274 ****** 2025-09-29 06:23:50.052638 | orchestrator | =============================================================================== 2025-09-29 06:23:50.052644 | orchestrator | designate : Copying over designate.conf 
-------------------------------- 22.72s 2025-09-29 06:23:50.052653 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.89s 2025-09-29 06:23:50.052660 | orchestrator | designate : Restart designate-api container ---------------------------- 12.31s 2025-09-29 06:23:50.052666 | orchestrator | designate : Restart designate-producer container ----------------------- 11.34s 2025-09-29 06:23:50.052671 | orchestrator | designate : Restart designate-worker container ------------------------- 11.18s 2025-09-29 06:23:50.052683 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.75s 2025-09-29 06:23:50.052690 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.33s 2025-09-29 06:23:50.052696 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.66s 2025-09-29 06:23:50.052703 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.07s 2025-09-29 06:23:50.052708 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.00s 2025-09-29 06:23:50.052712 | orchestrator | designate : Copying over config.json files for services ----------------- 6.70s 2025-09-29 06:23:50.052715 | orchestrator | designate : Restart designate-central container ------------------------- 6.13s 2025-09-29 06:23:50.052719 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.95s 2025-09-29 06:23:50.052723 | orchestrator | designate : Check designate containers ---------------------------------- 4.46s 2025-09-29 06:23:50.052726 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.37s 2025-09-29 06:23:50.052730 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.35s 2025-09-29 06:23:50.052734 | orchestrator | service-ks-register : designate | Creating users 
------------------------ 4.32s 2025-09-29 06:23:50.052738 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.99s 2025-09-29 06:23:50.052741 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.75s 2025-09-29 06:23:50.052745 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.51s 2025-09-29 06:23:50.052749 | orchestrator | 2025-09-29 06:23:50 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:50.052753 | orchestrator | 2025-09-29 06:23:50 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:23:50.052757 | orchestrator | 2025-09-29 06:23:50 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:53.103112 | orchestrator | 2025-09-29 06:23:53 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:23:53.103605 | orchestrator | 2025-09-29 06:23:53 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state STARTED 2025-09-29 06:23:53.105336 | orchestrator | 2025-09-29 06:23:53 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:53.105946 | orchestrator | 2025-09-29 06:23:53 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:23:53.105978 | orchestrator | 2025-09-29 06:23:53 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:56.132854 | orchestrator | 2025-09-29 06:23:56 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:23:56.133066 | orchestrator | 2025-09-29 06:23:56 | INFO  | Task d6cf6436-f783-4d98-bbe9-df6c0da8f499 is in state SUCCESS 2025-09-29 06:23:56.134764 | orchestrator | 2025-09-29 06:23:56 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:56.136429 | orchestrator | 2025-09-29 06:23:56 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 
06:23:56.136500 | orchestrator | 2025-09-29 06:23:56 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:23:59.159875 | orchestrator | 2025-09-29 06:23:59 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:23:59.160322 | orchestrator | 2025-09-29 06:23:59 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:23:59.161306 | orchestrator | 2025-09-29 06:23:59 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:23:59.161780 | orchestrator | 2025-09-29 06:23:59 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:23:59.161849 | orchestrator | 2025-09-29 06:23:59 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:02.188133 | orchestrator | 2025-09-29 06:24:02 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:02.189716 | orchestrator | 2025-09-29 06:24:02 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:02.190981 | orchestrator | 2025-09-29 06:24:02 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:02.192995 | orchestrator | 2025-09-29 06:24:02 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:02.193065 | orchestrator | 2025-09-29 06:24:02 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:05.227849 | orchestrator | 2025-09-29 06:24:05 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:05.229585 | orchestrator | 2025-09-29 06:24:05 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:05.231109 | orchestrator | 2025-09-29 06:24:05 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:05.232509 | orchestrator | 2025-09-29 06:24:05 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:05.232564 | orchestrator 
| 2025-09-29 06:24:05 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:08.282222 | orchestrator | 2025-09-29 06:24:08 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:08.283935 | orchestrator | 2025-09-29 06:24:08 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:08.285261 | orchestrator | 2025-09-29 06:24:08 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:08.286937 | orchestrator | 2025-09-29 06:24:08 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:08.286986 | orchestrator | 2025-09-29 06:24:08 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:11.330377 | orchestrator | 2025-09-29 06:24:11 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:11.331531 | orchestrator | 2025-09-29 06:24:11 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:11.333149 | orchestrator | 2025-09-29 06:24:11 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:11.334784 | orchestrator | 2025-09-29 06:24:11 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:11.334922 | orchestrator | 2025-09-29 06:24:11 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:14.371371 | orchestrator | 2025-09-29 06:24:14 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:14.372660 | orchestrator | 2025-09-29 06:24:14 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:14.374261 | orchestrator | 2025-09-29 06:24:14 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:14.375530 | orchestrator | 2025-09-29 06:24:14 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:14.375584 | orchestrator | 2025-09-29 06:24:14 | INFO  | 
Wait 1 second(s) until the next check 2025-09-29 06:24:17.417314 | orchestrator | 2025-09-29 06:24:17 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:17.418510 | orchestrator | 2025-09-29 06:24:17 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:17.420515 | orchestrator | 2025-09-29 06:24:17 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:17.421852 | orchestrator | 2025-09-29 06:24:17 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:17.421889 | orchestrator | 2025-09-29 06:24:17 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:20.466738 | orchestrator | 2025-09-29 06:24:20 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:20.468126 | orchestrator | 2025-09-29 06:24:20 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:20.471925 | orchestrator | 2025-09-29 06:24:20 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:20.475056 | orchestrator | 2025-09-29 06:24:20 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:20.475108 | orchestrator | 2025-09-29 06:24:20 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:23.523130 | orchestrator | 2025-09-29 06:24:23 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:23.524250 | orchestrator | 2025-09-29 06:24:23 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:23.526246 | orchestrator | 2025-09-29 06:24:23 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:23.527596 | orchestrator | 2025-09-29 06:24:23 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:23.527686 | orchestrator | 2025-09-29 06:24:23 | INFO  | Wait 1 second(s) until the next 
check 2025-09-29 06:24:26.568257 | orchestrator | 2025-09-29 06:24:26 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:26.568382 | orchestrator | 2025-09-29 06:24:26 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:26.569595 | orchestrator | 2025-09-29 06:24:26 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:26.570253 | orchestrator | 2025-09-29 06:24:26 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:26.570280 | orchestrator | 2025-09-29 06:24:26 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:29.610348 | orchestrator | 2025-09-29 06:24:29 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:29.611503 | orchestrator | 2025-09-29 06:24:29 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:29.613288 | orchestrator | 2025-09-29 06:24:29 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:29.615151 | orchestrator | 2025-09-29 06:24:29 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state STARTED 2025-09-29 06:24:29.617626 | orchestrator | 2025-09-29 06:24:29 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:32.667626 | orchestrator | 2025-09-29 06:24:32 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:32.670078 | orchestrator | 2025-09-29 06:24:32 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:32.670150 | orchestrator | 2025-09-29 06:24:32 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state STARTED 2025-09-29 06:24:32.670160 | orchestrator | 2025-09-29 06:24:32 | INFO  | Task 1483fe47-2b6a-44b4-b9b6-2e57e588710f is in state SUCCESS 2025-09-29 06:24:32.670169 | orchestrator | 2025-09-29 06:24:32 | INFO  | Wait 1 second(s) until the next check 2025-09-29 
06:24:35.702668 | orchestrator | 2025-09-29 06:24:35 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:35.703651 | orchestrator | 2025-09-29 06:24:35 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:35.705269 | orchestrator | 2025-09-29 06:24:35 | INFO  | Task 3719c9fc-17e8-40b0-8a62-2325c3f7b4dc is in state SUCCESS 2025-09-29 06:24:35.705850 | orchestrator | 2025-09-29 06:24:35.705896 | orchestrator | 2025-09-29 06:24:35.705911 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-29 06:24:35.705924 | orchestrator | 2025-09-29 06:24:35.705942 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-29 06:24:35.705953 | orchestrator | Monday 29 September 2025 06:22:54 +0000 (0:00:00.124) 0:00:00.124 ****** 2025-09-29 06:24:35.705965 | orchestrator | changed: [localhost] 2025-09-29 06:24:35.705977 | orchestrator | 2025-09-29 06:24:35.705989 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-29 06:24:35.706001 | orchestrator | Monday 29 September 2025 06:22:55 +0000 (0:00:00.885) 0:00:01.009 ****** 2025-09-29 06:24:35.706055 | orchestrator | changed: [localhost] 2025-09-29 06:24:35.706072 | orchestrator | 2025-09-29 06:24:35.706084 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-29 06:24:35.706096 | orchestrator | Monday 29 September 2025 06:23:28 +0000 (0:00:33.050) 0:00:34.060 ****** 2025-09-29 06:24:35.706107 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
2025-09-29 06:24:35.706131 | orchestrator | changed: [localhost] 2025-09-29 06:24:35.706143 | orchestrator | 2025-09-29 06:24:35.706155 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:24:35.706167 | orchestrator | 2025-09-29 06:24:35.706212 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:24:35.706224 | orchestrator | Monday 29 September 2025 06:23:53 +0000 (0:00:25.370) 0:00:59.431 ****** 2025-09-29 06:24:35.706237 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:24:35.706249 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:24:35.706260 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:24:35.706270 | orchestrator | 2025-09-29 06:24:35.706281 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:24:35.706293 | orchestrator | Monday 29 September 2025 06:23:54 +0000 (0:00:00.424) 0:00:59.855 ****** 2025-09-29 06:24:35.706304 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-29 06:24:35.706317 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-29 06:24:35.706329 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-29 06:24:35.706340 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-29 06:24:35.706353 | orchestrator | 2025-09-29 06:24:35.706365 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-29 06:24:35.706377 | orchestrator | skipping: no hosts matched 2025-09-29 06:24:35.706394 | orchestrator | 2025-09-29 06:24:35.706578 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:24:35.706600 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.706615 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.706659 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.706675 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.706696 | orchestrator | 2025-09-29 06:24:35.706717 | orchestrator | 2025-09-29 06:24:35.706731 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:24:35.706745 | orchestrator | Monday 29 September 2025 06:23:55 +0000 (0:00:01.004) 0:01:00.860 ****** 2025-09-29 06:24:35.706790 | orchestrator | =============================================================================== 2025-09-29 06:24:35.706805 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 33.05s 2025-09-29 06:24:35.706819 | orchestrator | Download ironic-agent kernel ------------------------------------------- 25.37s 2025-09-29 06:24:35.706832 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2025-09-29 06:24:35.706846 | orchestrator | Ensure the destination directory exists --------------------------------- 0.89s 2025-09-29 06:24:35.706860 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-09-29 06:24:35.706873 | orchestrator | 2025-09-29 06:24:35.706886 | orchestrator | 2025-09-29 06:24:35.706898 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:24:35.706911 | orchestrator | 2025-09-29 06:24:35.706924 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:24:35.706937 | orchestrator | Monday 29 September 2025 06:24:00 +0000 (0:00:00.271) 0:00:00.271 ****** 2025-09-29 06:24:35.706950 | orchestrator | ok: [testbed-node-3] 2025-09-29 
06:24:35.706963 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:24:35.706977 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:24:35.706990 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:24:35.707002 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:24:35.707016 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:24:35.707029 | orchestrator | ok: [testbed-manager] 2025-09-29 06:24:35.707041 | orchestrator | 2025-09-29 06:24:35.707053 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:24:35.707065 | orchestrator | Monday 29 September 2025 06:24:01 +0000 (0:00:00.668) 0:00:00.940 ****** 2025-09-29 06:24:35.707077 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-29 06:24:35.707101 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-29 06:24:35.707113 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-29 06:24:35.707125 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-29 06:24:35.707137 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-29 06:24:35.707150 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-29 06:24:35.707163 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-29 06:24:35.707176 | orchestrator | 2025-09-29 06:24:35.707205 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-29 06:24:35.707219 | orchestrator | 2025-09-29 06:24:35.707233 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-29 06:24:35.707246 | orchestrator | Monday 29 September 2025 06:24:01 +0000 (0:00:00.614) 0:00:01.555 ****** 2025-09-29 06:24:35.707259 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-manager 2025-09-29 06:24:35.707272 | orchestrator | 2025-09-29 06:24:35.707284 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-29 06:24:35.707296 | orchestrator | Monday 29 September 2025 06:24:03 +0000 (0:00:01.284) 0:00:02.839 ****** 2025-09-29 06:24:35.707308 | orchestrator | changed: [testbed-node-3] => (item=swift (object-store)) 2025-09-29 06:24:35.707321 | orchestrator | 2025-09-29 06:24:35.707333 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-29 06:24:35.707354 | orchestrator | Monday 29 September 2025 06:24:06 +0000 (0:00:03.244) 0:00:06.084 ****** 2025-09-29 06:24:35.707368 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-29 06:24:35.707382 | orchestrator | changed: [testbed-node-3] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-29 06:24:35.707395 | orchestrator | 2025-09-29 06:24:35.707408 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-29 06:24:35.707416 | orchestrator | Monday 29 September 2025 06:24:12 +0000 (0:00:06.385) 0:00:12.470 ****** 2025-09-29 06:24:35.707432 | orchestrator | ok: [testbed-node-3] => (item=service) 2025-09-29 06:24:35.707439 | orchestrator | 2025-09-29 06:24:35.707450 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-29 06:24:35.707511 | orchestrator | Monday 29 September 2025 06:24:16 +0000 (0:00:03.207) 0:00:15.677 ****** 2025-09-29 06:24:35.707527 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-29 06:24:35.707539 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service) 2025-09-29 06:24:35.707550 | orchestrator | 2025-09-29 06:24:35.707562 | orchestrator | TASK [service-ks-register : 
ceph-rgw | Creating roles] ************************* 2025-09-29 06:24:35.707575 | orchestrator | Monday 29 September 2025 06:24:19 +0000 (0:00:03.901) 0:00:19.578 ****** 2025-09-29 06:24:35.707587 | orchestrator | ok: [testbed-node-3] => (item=admin) 2025-09-29 06:24:35.707600 | orchestrator | changed: [testbed-node-3] => (item=ResellerAdmin) 2025-09-29 06:24:35.707613 | orchestrator | 2025-09-29 06:24:35.707625 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-29 06:24:35.707637 | orchestrator | Monday 29 September 2025 06:24:26 +0000 (0:00:06.875) 0:00:26.454 ****** 2025-09-29 06:24:35.707645 | orchestrator | changed: [testbed-node-3] => (item=ceph_rgw -> service -> admin) 2025-09-29 06:24:35.707652 | orchestrator | 2025-09-29 06:24:35.707659 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:24:35.707666 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.707674 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.707681 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.707689 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.707696 | orchestrator | testbed-node-3 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.707704 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.707711 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:24:35.707718 | orchestrator | 2025-09-29 06:24:35.707725 | orchestrator | 2025-09-29 06:24:35.707733 | orchestrator | TASKS RECAP 
********************************************************************
2025-09-29 06:24:35.707740 | orchestrator | Monday 29 September 2025 06:24:31 +0000 (0:00:04.788) 0:00:31.243 ******
2025-09-29 06:24:35.707748 | orchestrator | ===============================================================================
2025-09-29 06:24:35.707755 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.88s
2025-09-29 06:24:35.707762 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.39s
2025-09-29 06:24:35.707769 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.79s
2025-09-29 06:24:35.707776 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.90s
2025-09-29 06:24:35.707784 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.24s
2025-09-29 06:24:35.707791 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.21s
2025-09-29 06:24:35.707798 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.28s
2025-09-29 06:24:35.707805 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s
2025-09-29 06:24:35.707812 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2025-09-29 06:24:35.707819 | orchestrator |
2025-09-29 06:24:35.708087 | orchestrator |
2025-09-29 06:24:35.708105 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:24:35.708116 | orchestrator |
2025-09-29 06:24:35.708124 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:24:35.708131 | orchestrator | Monday 29 September 2025 06:23:22 +0000 (0:00:00.270) 0:00:00.270 ******
2025-09-29 06:24:35.708138 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:24:35.708145 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:24:35.708152 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:24:35.708160 | orchestrator |
2025-09-29 06:24:35.708167 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:24:35.708174 | orchestrator | Monday 29 September 2025 06:23:23 +0000 (0:00:00.299) 0:00:00.570 ******
2025-09-29 06:24:35.708181 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-29 06:24:35.708188 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-29 06:24:35.708195 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-29 06:24:35.708202 | orchestrator |
2025-09-29 06:24:35.708215 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-29 06:24:35.708223 | orchestrator |
2025-09-29 06:24:35.708230 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-29 06:24:35.708237 | orchestrator | Monday 29 September 2025 06:23:23 +0000 (0:00:00.433) 0:00:01.003 ******
2025-09-29 06:24:35.708244 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:24:35.708252 | orchestrator |
2025-09-29 06:24:35.708259 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-29 06:24:35.708266 | orchestrator | Monday 29 September 2025 06:23:24 +0000 (0:00:00.543) 0:00:01.547 ******
2025-09-29 06:24:35.708273 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-29 06:24:35.708280 | orchestrator |
2025-09-29 06:24:35.708288 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-29 06:24:35.708295 | orchestrator | Monday 29 September 2025 06:23:27 +0000 (0:00:03.858) 0:00:05.406 ******
2025-09-29 06:24:35.708302 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-29 06:24:35.708310 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-29 06:24:35.708317 | orchestrator |
2025-09-29 06:24:35.708324 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-29 06:24:35.708331 | orchestrator | Monday 29 September 2025 06:23:35 +0000 (0:00:07.107) 0:00:12.514 ******
2025-09-29 06:24:35.708338 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-29 06:24:35.708346 | orchestrator |
2025-09-29 06:24:35.708353 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-29 06:24:35.708360 | orchestrator | Monday 29 September 2025 06:23:38 +0000 (0:00:03.529) 0:00:16.044 ******
2025-09-29 06:24:35.708367 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-29 06:24:35.708375 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-29 06:24:35.708382 | orchestrator |
2025-09-29 06:24:35.708389 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-29 06:24:35.708396 | orchestrator | Monday 29 September 2025 06:23:42 +0000 (0:00:04.076) 0:00:20.120 ******
2025-09-29 06:24:35.708403 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-29 06:24:35.708411 | orchestrator |
2025-09-29 06:24:35.708418 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-29 06:24:35.708425 | orchestrator | Monday 29 September 2025 06:23:46 +0000 (0:00:03.475) 0:00:23.596 ******
2025-09-29 06:24:35.708432 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-29 06:24:35.708439 | orchestrator |
2025-09-29 06:24:35.708447 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-29 06:24:35.708474 | orchestrator | Monday 29 September 2025 06:23:50 +0000 (0:00:04.291) 0:00:27.887 ******
2025-09-29 06:24:35.708482 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:24:35.708489 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:24:35.708496 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:24:35.708504 | orchestrator |
2025-09-29 06:24:35.708511 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-29 06:24:35.708518 | orchestrator | Monday 29 September 2025 06:23:50 +0000 (0:00:00.303) 0:00:28.190 ******
2025-09-29 06:24:35.708527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708566 | orchestrator |
2025-09-29 06:24:35.708573 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-29 06:24:35.708580 | orchestrator | Monday 29 September 2025 06:23:51 +0000 (0:00:00.887) 0:00:29.078 ******
2025-09-29 06:24:35.708588 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:24:35.708595 | orchestrator |
2025-09-29 06:24:35.708603 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-29 06:24:35.708610 | orchestrator | Monday 29 September 2025 06:23:51 +0000 (0:00:00.145) 0:00:29.224 ******
2025-09-29 06:24:35.708617 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:24:35.708625 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:24:35.708632 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:24:35.708643 | orchestrator |
2025-09-29 06:24:35.708651 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-29 06:24:35.708658 | orchestrator | Monday 29 September 2025 06:23:52 +0000 (0:00:00.467) 0:00:29.691 ******
2025-09-29 06:24:35.708665 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:24:35.708673 | orchestrator |
2025-09-29 06:24:35.708681 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-09-29 06:24:35.708688 | orchestrator | Monday 29 September 2025 06:23:52 +0000 (0:00:00.482) 0:00:30.174 ******
2025-09-29 06:24:35.708695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708733 | orchestrator |
2025-09-29 06:24:35.708741 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-09-29 06:24:35.708749 | orchestrator | Monday 29 September 2025 06:23:54 +0000 (0:00:01.923) 0:00:32.097 ******
2025-09-29 06:24:35.708758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708771 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:24:35.708780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708789 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:24:35.708801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708810 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:24:35.708818 | orchestrator |
2025-09-29 06:24:35.708826 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-09-29 06:24:35.708835 | orchestrator | Monday 29 September 2025 06:23:56 +0000 (0:00:01.506) 0:00:33.604 ******
2025-09-29 06:24:35.708846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708856 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:24:35.708864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708877 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:24:35.708886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708895 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:24:35.708903 | orchestrator |
2025-09-29 06:24:35.708912 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-09-29 06:24:35.708920 | orchestrator | Monday 29 September 2025 06:23:56 +0000 (0:00:00.824) 0:00:34.429 ******
2025-09-29 06:24:35.708932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708967 | orchestrator |
2025-09-29 06:24:35.708974 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-09-29 06:24:35.708982 | orchestrator | Monday 29 September 2025 06:23:58 +0000 (0:00:01.726) 0:00:36.156 ******
2025-09-29 06:24:35.708989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.708997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.709009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-29 06:24:35.709017 | orchestrator |
2025-09-29 06:24:35.709024 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-09-29 06:24:35.709034 | orchestrator | Monday 29 September 2025 06:24:01 +0000 (0:00:02.985) 0:00:39.141 ******
2025-09-29 06:24:35.709042 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-29 06:24:35.709053 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-29 06:24:35.709061 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-29 06:24:35.709068 | orchestrator |
2025-09-29 06:24:35.709075 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-09-29 06:24:35.709082 | orchestrator | Monday 29 September 2025 06:24:03 +0000 (0:00:01.473) 0:00:40.615 ******
2025-09-29 06:24:35.709090 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:24:35.709098 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:24:35.709107 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:24:35.709120 | orchestrator |
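The container definitions logged above configure a healthcheck of the form `healthcheck_curl http://192.168.16.1x:8780` (interval 30s, 3 retries, 30s timeout) against each node's placement-api port. As a rough illustration only (not part of the deployment; `check_endpoint` is a hypothetical helper, and the real `healthcheck_curl` script may apply stricter status handling), an equivalent liveness probe can be sketched in Python:

```python
import urllib.request
import urllib.error


def check_endpoint(url: str, timeout: float = 30.0) -> bool:
    """Return True if the endpoint answers any HTTP response within the timeout.

    Simplified stand-in for a container healthcheck: any HTTP status (even an
    auth error from the API) means the process is serving requests; only
    connection failures and timeouts count as unhealthy.
    """
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # The server responded (e.g. 401/501), so it is alive.
        return True
    except (urllib.error.URLError, OSError):
        return False
```

A healthcheck like the logged one would then amount to calling `check_endpoint("http://192.168.16.10:8780")` every 30 seconds and marking the container unhealthy after 3 consecutive failures.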
2025-09-29 06:24:35.709136 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-29 06:24:35.709151 | orchestrator | Monday 29 September 2025 06:24:04 +0000 (0:00:01.264) 0:00:41.879 ****** 2025-09-29 06:24:35.709163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-29 06:24:35.709176 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:24:35.709189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-29 06:24:35.709203 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:24:35.709224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-29 06:24:35.709234 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:24:35.709242 | orchestrator | 2025-09-29 06:24:35.709249 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-29 06:24:35.709263 | orchestrator | Monday 29 September 2025 06:24:04 +0000 (0:00:00.498) 0:00:42.377 ****** 2025-09-29 06:24:35.709275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-29 06:24:35.709283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-29 06:24:35.709291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-29 06:24:35.709298 | orchestrator | 2025-09-29 06:24:35.709306 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-29 06:24:35.709313 | orchestrator | Monday 29 September 2025 06:24:06 +0000 (0:00:01.133) 0:00:43.511 ****** 2025-09-29 06:24:35.709321 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:24:35.709328 | orchestrator | 2025-09-29 06:24:35.709335 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-29 06:24:35.709343 | orchestrator | Monday 29 September 2025 06:24:08 +0000 (0:00:02.566) 0:00:46.078 ****** 2025-09-29 06:24:35.709351 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:24:35.709358 | orchestrator | 2025-09-29 06:24:35.709366 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-29 06:24:35.709373 | orchestrator | Monday 29 September 2025 06:24:10 +0000 (0:00:02.266) 0:00:48.345 ****** 2025-09-29 06:24:35.709380 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:24:35.709387 | orchestrator | 2025-09-29 06:24:35.709395 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-29 06:24:35.709402 | orchestrator | Monday 29 September 2025 06:24:25 +0000 (0:00:14.994) 0:01:03.340 ****** 2025-09-29 06:24:35.709414 | orchestrator | 2025-09-29 06:24:35.709421 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-29 06:24:35.709428 | orchestrator | Monday 29 September 2025 06:24:25 +0000 (0:00:00.081) 
0:01:03.422 ****** 2025-09-29 06:24:35.709436 | orchestrator | 2025-09-29 06:24:35.709447 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-29 06:24:35.709454 | orchestrator | Monday 29 September 2025 06:24:26 +0000 (0:00:00.109) 0:01:03.531 ****** 2025-09-29 06:24:35.709491 | orchestrator | 2025-09-29 06:24:35.709500 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-29 06:24:35.709507 | orchestrator | Monday 29 September 2025 06:24:26 +0000 (0:00:00.086) 0:01:03.617 ****** 2025-09-29 06:24:35.709514 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:24:35.709522 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:24:35.709529 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:24:35.709537 | orchestrator | 2025-09-29 06:24:35.709544 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:24:35.709552 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-29 06:24:35.709564 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-29 06:24:35.709571 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-29 06:24:35.709579 | orchestrator | 2025-09-29 06:24:35.709586 | orchestrator | 2025-09-29 06:24:35.709593 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:24:35.709601 | orchestrator | Monday 29 September 2025 06:24:34 +0000 (0:00:08.434) 0:01:12.052 ****** 2025-09-29 06:24:35.709613 | orchestrator | =============================================================================== 2025-09-29 06:24:35.709632 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.99s 2025-09-29 06:24:35.709646 | orchestrator | 
placement : Restart placement-api container ----------------------------- 8.43s 2025-09-29 06:24:35.709657 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.11s 2025-09-29 06:24:35.709669 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.29s 2025-09-29 06:24:35.709680 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.08s 2025-09-29 06:24:35.709693 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.86s 2025-09-29 06:24:35.709705 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.53s 2025-09-29 06:24:35.709719 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.48s 2025-09-29 06:24:35.709731 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.98s 2025-09-29 06:24:35.709743 | orchestrator | placement : Creating placement databases -------------------------------- 2.57s 2025-09-29 06:24:35.709751 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.27s 2025-09-29 06:24:35.709758 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.92s 2025-09-29 06:24:35.709765 | orchestrator | placement : Copying over config.json files for services ----------------- 1.73s 2025-09-29 06:24:35.709772 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.51s 2025-09-29 06:24:35.709780 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.47s 2025-09-29 06:24:35.709787 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.26s 2025-09-29 06:24:35.709794 | orchestrator | placement : Check placement containers ---------------------------------- 1.13s 2025-09-29 06:24:35.709801 | orchestrator | 
placement : Ensuring config directories exist --------------------------- 0.89s 2025-09-29 06:24:35.709809 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.82s 2025-09-29 06:24:35.709823 | orchestrator | placement : include_tasks ----------------------------------------------- 0.54s 2025-09-29 06:24:35.709830 | orchestrator | 2025-09-29 06:24:35 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:24:35.709837 | orchestrator | 2025-09-29 06:24:35 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:24:38.748765 | orchestrator | 2025-09-29 06:24:38 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:24:38.749187 | orchestrator | 2025-09-29 06:24:38 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:24:38.750073 | orchestrator | 2025-09-29 06:24:38 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state STARTED 2025-09-29 06:24:38.751359 | orchestrator | 2025-09-29 06:24:38 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:24:38.751440 | orchestrator | 2025-09-29 06:24:38 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:21.507900 | orchestrator | 2025-09-29 06:25:21 | INFO  | Task
e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:21.508196 | orchestrator | 2025-09-29 06:25:21 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:21.509261 | orchestrator | 2025-09-29 06:25:21 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:21.511050 | orchestrator | 2025-09-29 06:25:21.511104 | orchestrator | 2025-09-29 06:25:21 | INFO  | Task 6ecc3bd6-1e86-48ef-9a65-7e06253a46dc is in state SUCCESS 2025-09-29 06:25:21.512303 | orchestrator | 2025-09-29 06:25:21.512337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:25:21.512349 | orchestrator | 2025-09-29 06:25:21.512361 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:25:21.512372 | orchestrator | Monday 29 September 2025 06:20:35 +0000 (0:00:00.443) 0:00:00.443 ****** 2025-09-29 06:25:21.512383 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:25:21.512395 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:25:21.512406 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:25:21.512416 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:25:21.512427 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:25:21.512438 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:25:21.512449 | orchestrator | 2025-09-29 06:25:21.512460 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:25:21.512509 | orchestrator | Monday 29 September 2025 06:20:36 +0000 (0:00:00.664) 0:00:01.107 ****** 2025-09-29 06:25:21.512531 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-29 06:25:21.512544 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-29 06:25:21.512704 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-29 06:25:21.512718 | orchestrator | ok: [testbed-node-3] => 
(item=enable_neutron_True) 2025-09-29 06:25:21.512729 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-29 06:25:21.512740 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-29 06:25:21.512750 | orchestrator | 2025-09-29 06:25:21.512761 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-29 06:25:21.512772 | orchestrator | 2025-09-29 06:25:21.513308 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-29 06:25:21.513336 | orchestrator | Monday 29 September 2025 06:20:37 +0000 (0:00:00.853) 0:00:01.961 ****** 2025-09-29 06:25:21.513349 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:25:21.513361 | orchestrator | 2025-09-29 06:25:21.513372 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-29 06:25:21.513398 | orchestrator | Monday 29 September 2025 06:20:38 +0000 (0:00:01.268) 0:00:03.229 ****** 2025-09-29 06:25:21.513409 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:25:21.513419 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:25:21.513441 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:25:21.513452 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:25:21.513463 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:25:21.513503 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:25:21.513516 | orchestrator | 2025-09-29 06:25:21.513527 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-29 06:25:21.513538 | orchestrator | Monday 29 September 2025 06:20:39 +0000 (0:00:01.193) 0:00:04.423 ****** 2025-09-29 06:25:21.513549 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:25:21.513559 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:25:21.513570 | orchestrator 
| ok: [testbed-node-1] 2025-09-29 06:25:21.513580 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:25:21.513592 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:25:21.513627 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:25:21.513638 | orchestrator | 2025-09-29 06:25:21.513648 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-29 06:25:21.513659 | orchestrator | Monday 29 September 2025 06:20:40 +0000 (0:00:01.466) 0:00:05.889 ****** 2025-09-29 06:25:21.513670 | orchestrator | ok: [testbed-node-0] => { 2025-09-29 06:25:21.513681 | orchestrator |  "changed": false, 2025-09-29 06:25:21.513692 | orchestrator |  "msg": "All assertions passed" 2025-09-29 06:25:21.513703 | orchestrator | } 2025-09-29 06:25:21.513714 | orchestrator | ok: [testbed-node-1] => { 2025-09-29 06:25:21.513724 | orchestrator |  "changed": false, 2025-09-29 06:25:21.513735 | orchestrator |  "msg": "All assertions passed" 2025-09-29 06:25:21.513745 | orchestrator | } 2025-09-29 06:25:21.513756 | orchestrator | ok: [testbed-node-2] => { 2025-09-29 06:25:21.513766 | orchestrator |  "changed": false, 2025-09-29 06:25:21.513776 | orchestrator |  "msg": "All assertions passed" 2025-09-29 06:25:21.513787 | orchestrator | } 2025-09-29 06:25:21.513797 | orchestrator | ok: [testbed-node-3] => { 2025-09-29 06:25:21.513808 | orchestrator |  "changed": false, 2025-09-29 06:25:21.513818 | orchestrator |  "msg": "All assertions passed" 2025-09-29 06:25:21.513829 | orchestrator | } 2025-09-29 06:25:21.513839 | orchestrator | ok: [testbed-node-4] => { 2025-09-29 06:25:21.513850 | orchestrator |  "changed": false, 2025-09-29 06:25:21.513861 | orchestrator |  "msg": "All assertions passed" 2025-09-29 06:25:21.513871 | orchestrator | } 2025-09-29 06:25:21.513883 | orchestrator | ok: [testbed-node-5] => { 2025-09-29 06:25:21.513901 | orchestrator |  "changed": false, 2025-09-29 06:25:21.513919 | orchestrator |  "msg": "All assertions passed" 2025-09-29 
06:25:21.513936 | orchestrator | } 2025-09-29 06:25:21.513956 | orchestrator | 2025-09-29 06:25:21.513975 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-29 06:25:21.514062 | orchestrator | Monday 29 September 2025 06:20:41 +0000 (0:00:00.819) 0:00:06.709 ****** 2025-09-29 06:25:21.514078 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.514089 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.514100 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.514110 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.514121 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.514132 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.514143 | orchestrator | 2025-09-29 06:25:21.514153 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-29 06:25:21.514164 | orchestrator | Monday 29 September 2025 06:20:42 +0000 (0:00:00.512) 0:00:07.221 ****** 2025-09-29 06:25:21.514175 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-29 06:25:21.514186 | orchestrator | 2025-09-29 06:25:21.514197 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-29 06:25:21.514207 | orchestrator | Monday 29 September 2025 06:20:46 +0000 (0:00:03.957) 0:00:11.178 ****** 2025-09-29 06:25:21.514218 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-29 06:25:21.514234 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-29 06:25:21.514253 | orchestrator | 2025-09-29 06:25:21.514339 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-29 06:25:21.514359 | orchestrator | Monday 29 September 2025 06:20:53 +0000 (0:00:06.910) 0:00:18.089 ****** 2025-09-29 
06:25:21.514376 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-29 06:25:21.514392 | orchestrator | 2025-09-29 06:25:21.514408 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-29 06:25:21.514423 | orchestrator | Monday 29 September 2025 06:20:56 +0000 (0:00:03.676) 0:00:21.765 ****** 2025-09-29 06:25:21.514438 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-29 06:25:21.514455 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-09-29 06:25:21.514470 | orchestrator | 2025-09-29 06:25:21.514530 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-29 06:25:21.514546 | orchestrator | Monday 29 September 2025 06:21:01 +0000 (0:00:04.352) 0:00:26.117 ****** 2025-09-29 06:25:21.514563 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-29 06:25:21.514581 | orchestrator | 2025-09-29 06:25:21.514599 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-29 06:25:21.514616 | orchestrator | Monday 29 September 2025 06:21:05 +0000 (0:00:04.251) 0:00:30.369 ****** 2025-09-29 06:25:21.514635 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-29 06:25:21.514653 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-29 06:25:21.514671 | orchestrator | 2025-09-29 06:25:21.514690 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-29 06:25:21.514708 | orchestrator | Monday 29 September 2025 06:21:13 +0000 (0:00:08.099) 0:00:38.468 ****** 2025-09-29 06:25:21.514725 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.514745 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.514757 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.514767 | orchestrator | skipping: [testbed-node-3] 
2025-09-29 06:25:21.514778 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.514789 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.514799 | orchestrator | 2025-09-29 06:25:21.514810 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-29 06:25:21.514830 | orchestrator | Monday 29 September 2025 06:21:14 +0000 (0:00:00.620) 0:00:39.088 ****** 2025-09-29 06:25:21.514841 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.514852 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.514862 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.514873 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.514901 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.514919 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.514936 | orchestrator | 2025-09-29 06:25:21.514956 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-29 06:25:21.514974 | orchestrator | Monday 29 September 2025 06:21:16 +0000 (0:00:02.424) 0:00:41.513 ****** 2025-09-29 06:25:21.514993 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:25:21.515007 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:25:21.515018 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:25:21.515029 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:25:21.515040 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:25:21.515050 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:25:21.515061 | orchestrator | 2025-09-29 06:25:21.515072 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-29 06:25:21.515083 | orchestrator | Monday 29 September 2025 06:21:17 +0000 (0:00:00.915) 0:00:42.428 ****** 2025-09-29 06:25:21.515094 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.515105 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.515115 | orchestrator 
| skipping: [testbed-node-4] 2025-09-29 06:25:21.515126 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.515137 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.515147 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.515158 | orchestrator | 2025-09-29 06:25:21.515169 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-29 06:25:21.515179 | orchestrator | Monday 29 September 2025 06:21:19 +0000 (0:00:02.173) 0:00:44.602 ****** 2025-09-29 06:25:21.515194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.515266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.515281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.515310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.515322 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.515333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.515344 | orchestrator | 2025-09-29 06:25:21.515355 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-29 06:25:21.515366 | orchestrator | Monday 29 September 2025 06:21:23 +0000 (0:00:04.118) 0:00:48.721 ****** 2025-09-29 06:25:21.515377 | orchestrator | [WARNING]: Skipped 
2025-09-29 06:25:21.515388 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-29 06:25:21.515399 | orchestrator | due to this access issue: 2025-09-29 06:25:21.515410 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-29 06:25:21.515421 | orchestrator | a directory 2025-09-29 06:25:21.515432 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-29 06:25:21.515442 | orchestrator | 2025-09-29 06:25:21.515453 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-29 06:25:21.515548 | orchestrator | Monday 29 September 2025 06:21:24 +0000 (0:00:00.732) 0:00:49.453 ****** 2025-09-29 06:25:21.515563 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:25:21.515575 | orchestrator | 2025-09-29 06:25:21.515586 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-29 06:25:21.515597 | orchestrator | Monday 29 September 2025 06:21:26 +0000 (0:00:01.622) 0:00:51.075 ****** 2025-09-29 06:25:21.515608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.515634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.515647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.515659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.515701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.515722 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.515733 | orchestrator | 2025-09-29 06:25:21.515744 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-29 06:25:21.515755 | orchestrator | Monday 29 September 2025 06:21:29 +0000 (0:00:03.495) 0:00:54.571 ****** 2025-09-29 06:25:21.515771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.515783 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.515795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.515806 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.515818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.515866 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.515880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.515891 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.515907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.515919 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.515930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.515941 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.515952 | orchestrator | 2025-09-29 06:25:21.515963 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-29 06:25:21.515974 | orchestrator | Monday 29 September 2025 06:21:34 +0000 (0:00:04.839) 0:00:59.410 ****** 2025-09-29 06:25:21.515985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.515996 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.516013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516041 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.516051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.516062 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.516076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.516086 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.516096 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516106 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.516116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-29 06:25:21.516132 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:25:21.516142 | orchestrator |
2025-09-29 06:25:21.516152 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-09-29 06:25:21.516161 | orchestrator | Monday 29 September 2025 06:21:37 +0000 (0:00:02.533) 0:01:02.702 ******
2025-09-29 06:25:21.516171 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:25:21.516180 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:25:21.516190 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:25:21.516199 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:25:21.516209 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:25:21.516218 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:25:21.516228 | orchestrator |
2025-09-29 06:25:21.516237 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-09-29 06:25:21.516255 | orchestrator | Monday 29 September 2025 06:21:40 +0000 (0:00:00.153) 0:01:05.236 ******
2025-09-29 06:25:21.516265 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:25:21.516275 | orchestrator |
2025-09-29 06:25:21.516285 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-09-29 06:25:21.516294 | orchestrator | Monday 29 September 2025 06:21:40 +0000 (0:00:00.721) 0:01:05.389 ******
2025-09-29 06:25:21.516304 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:25:21.516313 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:25:21.516323 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:25:21.516332 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:25:21.516341 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:25:21.516351 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:25:21.516360 | orchestrator |
2025-09-29 06:25:21.516370 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-29 06:25:21.516380 | orchestrator | Monday 29 September 2025 06:21:41 +0000 (0:00:00.721) 0:01:06.111 ****** 2025-09-29 06:25:21.516389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516400 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.516414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-09-29 06:25:21.516424 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.516434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.516450 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.516466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}})  2025-09-29 06:25:21.516493 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.516503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516513 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.516527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516538 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.516547 | orchestrator | 2025-09-29 06:25:21.516557 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-29 06:25:21.516567 | 
orchestrator | Monday 29 September 2025 06:21:44 +0000 (0:00:03.431) 0:01:09.542 ****** 2025-09-29 06:25:21.516576 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.516593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.516610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.516621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.516635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.516646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.516663 | orchestrator | 2025-09-29 06:25:21.516673 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-29 06:25:21.516683 | orchestrator | Monday 29 September 2025 06:21:48 +0000 (0:00:04.323) 0:01:13.866 ****** 2025-09-29 06:25:21.516693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.516709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.516720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.516734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.516751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.516761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.516771 | orchestrator | 2025-09-29 06:25:21.516792 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-29 06:25:21.516802 | orchestrator | Monday 29 September 2025 06:21:56 +0000 (0:00:07.815) 0:01:21.681 ****** 2025-09-29 06:25:21.516819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.516830 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.516844 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.516854 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.516864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.516887 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.516905 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516922 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.516939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516955 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.516981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.516998 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517014 | orchestrator | 2025-09-29 06:25:21.517032 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-09-29 06:25:21.517050 | orchestrator | Monday 29 September 2025 06:21:59 +0000 (0:00:03.072) 0:01:24.753 ****** 2025-09-29 06:25:21.517066 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517083 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:25:21.517094 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:25:21.517103 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517113 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517122 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:25:21.517140 | orchestrator | 2025-09-29 06:25:21.517150 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-09-29 06:25:21.517159 | orchestrator | Monday 29 September 2025 06:22:03 +0000 (0:00:03.617) 0:01:28.371 ****** 2025-09-29 06:25:21.517174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.517185 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517194 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.517204 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.517223 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.517252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 
06:25:21.517277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.517287 | orchestrator | 2025-09-29 06:25:21.517297 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-09-29 06:25:21.517307 | orchestrator | Monday 29 September 2025 06:22:07 +0000 (0:00:04.077) 0:01:32.449 ****** 2025-09-29 06:25:21.517317 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.517326 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.517336 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.517345 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517355 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517365 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517374 | orchestrator | 2025-09-29 06:25:21.517384 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-09-29 06:25:21.517393 | orchestrator | Monday 29 September 2025 06:22:10 +0000 (0:00:02.986) 0:01:35.435 ****** 2025-09-29 06:25:21.517403 | orchestrator | skipping: [testbed-node-1] 2025-09-29 
06:25:21.517412 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517422 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517431 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.517441 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517450 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.517460 | orchestrator | 2025-09-29 06:25:21.517469 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-09-29 06:25:21.517499 | orchestrator | Monday 29 September 2025 06:22:13 +0000 (0:00:02.578) 0:01:38.013 ****** 2025-09-29 06:25:21.517509 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.517518 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.517528 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517537 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.517547 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517556 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517566 | orchestrator | 2025-09-29 06:25:21.517576 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-09-29 06:25:21.517585 | orchestrator | Monday 29 September 2025 06:22:16 +0000 (0:00:02.987) 0:01:41.000 ****** 2025-09-29 06:25:21.517595 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.517604 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.517614 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.517623 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517632 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517642 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517651 | orchestrator | 2025-09-29 06:25:21.517667 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-09-29 06:25:21.517676 | orchestrator | Monday 29 
September 2025 06:22:19 +0000 (0:00:03.021) 0:01:44.022 ****** 2025-09-29 06:25:21.517686 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.517696 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517705 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.517715 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.517730 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517739 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517749 | orchestrator | 2025-09-29 06:25:21.517758 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-09-29 06:25:21.517768 | orchestrator | Monday 29 September 2025 06:22:21 +0000 (0:00:02.915) 0:01:46.938 ****** 2025-09-29 06:25:21.517778 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.517787 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.517797 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.517806 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.517816 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.517825 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517835 | orchestrator | 2025-09-29 06:25:21.517845 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-09-29 06:25:21.517854 | orchestrator | Monday 29 September 2025 06:22:24 +0000 (0:00:02.117) 0:01:49.055 ****** 2025-09-29 06:25:21.517864 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-29 06:25:21.517875 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.517893 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-29 06:25:21.517909 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.517925 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-29 06:25:21.517941 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.517956 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-29 06:25:21.517973 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.517989 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-29 06:25:21.518007 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.518050 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-29 06:25:21.518060 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.518070 | orchestrator | 2025-09-29 06:25:21.518085 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-29 06:25:21.518095 | orchestrator | Monday 29 September 2025 06:22:26 +0000 (0:00:02.248) 0:01:51.304 ****** 2025-09-29 06:25:21.518105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.518115 | orchestrator | 
skipping: [testbed-node-0] 2025-09-29 06:25:21.518125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.518142 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.518159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.518169 | orchestrator | 
skipping: [testbed-node-2] 2025-09-29 06:25:21.518179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.518189 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.518203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.518213 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.518223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': 
True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.518239 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.518248 | orchestrator | 2025-09-29 06:25:21.518258 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-29 06:25:21.518267 | orchestrator | Monday 29 September 2025 06:22:29 +0000 (0:00:02.989) 0:01:54.294 ****** 2025-09-29 06:25:21.518277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.518287 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.518303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.518313 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.518323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.518333 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.518347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.518364 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.518374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.518385 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.518395 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.518404 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.518414 | orchestrator | 2025-09-29 06:25:21.518423 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-09-29 06:25:21.518433 | orchestrator | Monday 29 September 2025 06:22:31 +0000 (0:00:01.764) 0:01:56.058 ****** 2025-09-29 06:25:21.518442 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.518456 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.518466 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.518624 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.518650 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.518659 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.518669 | orchestrator | 2025-09-29 06:25:21.518679 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-29 06:25:21.518689 | orchestrator | Monday 29 September 2025 06:22:33 +0000 (0:00:01.914) 0:01:57.973 ****** 2025-09-29 06:25:21.518698 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.518708 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.518717 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.518726 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:25:21.518736 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:25:21.518745 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:25:21.518754 | orchestrator | 2025-09-29 06:25:21.518764 | orchestrator | TASK [neutron : Copying over metering_agent.ini] 
******************************* 2025-09-29 06:25:21.518773 | orchestrator | Monday 29 September 2025 06:22:36 +0000 (0:00:03.538) 0:02:01.512 ****** 2025-09-29 06:25:21.518783 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.518792 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.518801 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.518811 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.518820 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.518830 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.518839 | orchestrator | 2025-09-29 06:25:21.518848 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-29 06:25:21.518858 | orchestrator | Monday 29 September 2025 06:22:39 +0000 (0:00:03.246) 0:02:04.759 ****** 2025-09-29 06:25:21.518867 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.518888 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.518897 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.518907 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.518916 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.518925 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.518935 | orchestrator | 2025-09-29 06:25:21.518944 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-29 06:25:21.518960 | orchestrator | Monday 29 September 2025 06:22:42 +0000 (0:00:03.135) 0:02:07.894 ****** 2025-09-29 06:25:21.518970 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.518979 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.518989 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.518998 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.519007 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.519017 | orchestrator | skipping: [testbed-node-5] 
2025-09-29 06:25:21.519026 | orchestrator | 2025-09-29 06:25:21.519036 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-29 06:25:21.519045 | orchestrator | Monday 29 September 2025 06:22:45 +0000 (0:00:02.053) 0:02:09.948 ****** 2025-09-29 06:25:21.519055 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.519063 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.519071 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.519079 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.519086 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.519094 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.519102 | orchestrator | 2025-09-29 06:25:21.519109 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-29 06:25:21.519117 | orchestrator | Monday 29 September 2025 06:22:48 +0000 (0:00:03.638) 0:02:13.586 ****** 2025-09-29 06:25:21.519125 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.519132 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.519140 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.519148 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.519155 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.519163 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.519171 | orchestrator | 2025-09-29 06:25:21.519178 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-29 06:25:21.519186 | orchestrator | Monday 29 September 2025 06:22:51 +0000 (0:00:03.065) 0:02:16.652 ****** 2025-09-29 06:25:21.519194 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.519202 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.519210 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.519217 | orchestrator | skipping: [testbed-node-1] 
2025-09-29 06:25:21.519225 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.519233 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.519240 | orchestrator | 2025-09-29 06:25:21.519248 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-29 06:25:21.519256 | orchestrator | Monday 29 September 2025 06:22:54 +0000 (0:00:02.366) 0:02:19.018 ****** 2025-09-29 06:25:21.519264 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.519271 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.519279 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.519286 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.519294 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.519302 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.519309 | orchestrator | 2025-09-29 06:25:21.519317 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-29 06:25:21.519325 | orchestrator | Monday 29 September 2025 06:22:55 +0000 (0:00:01.903) 0:02:20.921 ****** 2025-09-29 06:25:21.519333 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-29 06:25:21.519341 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.519354 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-29 06:25:21.519362 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.519369 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-29 06:25:21.519377 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.519385 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-29 06:25:21.519393 | orchestrator | skipping: [testbed-node-2] 2025-09-29 
06:25:21.519410 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-29 06:25:21.519418 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.519426 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-29 06:25:21.519434 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.519441 | orchestrator | 2025-09-29 06:25:21.519449 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-29 06:25:21.519457 | orchestrator | Monday 29 September 2025 06:22:58 +0000 (0:00:02.037) 0:02:22.959 ****** 2025-09-29 06:25:21.519466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.519498 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:21.519518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.519527 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:21.519535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-29 06:25:21.519549 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:21.519557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.519571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.519580 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:25:21.519588 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:25:21.519596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-29 06:25:21.519604 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:25:21.519612 | orchestrator | 2025-09-29 06:25:21.519623 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-29 06:25:21.519631 | orchestrator | Monday 29 September 2025 06:22:59 +0000 (0:00:01.653) 0:02:24.612 ****** 2025-09-29 06:25:21.519640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.519649 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.519666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.519679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 
06:25:21.519692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-29 06:25:21.519701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-29 06:25:21.519709 | orchestrator | 2025-09-29 06:25:21.519717 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-29 06:25:21.519725 | orchestrator | Monday 29 September 2025 06:23:03 +0000 (0:00:03.606) 0:02:28.219 ****** 2025-09-29 
06:25:21.519738 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:25:21.519746 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:25:21.519753 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:25:21.519761 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:25:21.519769 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:25:21.519777 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:25:21.519784 | orchestrator |
2025-09-29 06:25:21.519792 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-09-29 06:25:21.519800 | orchestrator | Monday 29 September 2025 06:23:04 +0000 (0:00:00.784) 0:02:29.004 ******
2025-09-29 06:25:21.519808 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:21.519815 | orchestrator |
2025-09-29 06:25:21.519823 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-09-29 06:25:21.519831 | orchestrator | Monday 29 September 2025 06:23:06 +0000 (0:00:02.237) 0:02:31.242 ******
2025-09-29 06:25:21.519839 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:21.519846 | orchestrator |
2025-09-29 06:25:21.519854 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-09-29 06:25:21.519862 | orchestrator | Monday 29 September 2025 06:23:08 +0000 (0:00:02.391) 0:02:33.634 ******
2025-09-29 06:25:21.519870 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:21.519878 | orchestrator |
2025-09-29 06:25:21.519886 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-29 06:25:21.519893 | orchestrator | Monday 29 September 2025 06:23:52 +0000 (0:00:43.871) 0:03:17.505 ******
2025-09-29 06:25:21.519901 | orchestrator |
2025-09-29 06:25:21.519909 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-29 06:25:21.519917 | orchestrator | Monday 29 September 2025 06:23:52 +0000 (0:00:00.075) 0:03:17.581 ******
2025-09-29 06:25:21.519924 | orchestrator |
2025-09-29 06:25:21.519932 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-29 06:25:21.519940 | orchestrator | Monday 29 September 2025 06:23:52 +0000 (0:00:00.263) 0:03:17.845 ******
2025-09-29 06:25:21.519947 | orchestrator |
2025-09-29 06:25:21.519955 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-29 06:25:21.519963 | orchestrator | Monday 29 September 2025 06:23:52 +0000 (0:00:00.079) 0:03:17.924 ******
2025-09-29 06:25:21.519971 | orchestrator |
2025-09-29 06:25:21.519983 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-29 06:25:21.519991 | orchestrator | Monday 29 September 2025 06:23:53 +0000 (0:00:00.072) 0:03:17.996 ******
2025-09-29 06:25:21.519999 | orchestrator |
2025-09-29 06:25:21.520007 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-09-29 06:25:21.520015 | orchestrator | Monday 29 September 2025 06:23:53 +0000 (0:00:00.065) 0:03:18.062 ******
2025-09-29 06:25:21.520022 | orchestrator |
2025-09-29 06:25:21.520030 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-09-29 06:25:21.520038 | orchestrator | Monday 29 September 2025 06:23:53 +0000 (0:00:00.075) 0:03:18.138 ******
2025-09-29 06:25:21.520046 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:21.520054 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:25:21.520061 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:25:21.520069 | orchestrator |
2025-09-29 06:25:21.520077 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-09-29 06:25:21.520085 | orchestrator | Monday 29 September 2025 06:24:24 +0000 (0:00:31.344) 0:03:49.482 ******
2025-09-29 06:25:21.520092 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:25:21.520100 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:25:21.520108 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:25:21.520115 | orchestrator |
2025-09-29 06:25:21.520123 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:25:21.520131 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-29 06:25:21.520145 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-29 06:25:21.520153 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-09-29 06:25:21.520165 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-29 06:25:21.520173 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-29 06:25:21.520181 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-29 06:25:21.520189 | orchestrator |
2025-09-29 06:25:21.520197 | orchestrator |
2025-09-29 06:25:21.520205 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:25:21.520213 | orchestrator | Monday 29 September 2025 06:25:19 +0000 (0:00:54.491) 0:04:43.974 ******
2025-09-29 06:25:21.520220 | orchestrator | ===============================================================================
2025-09-29 06:25:21.520228 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 54.49s
2025-09-29 06:25:21.520236 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.87s
2025-09-29 06:25:21.520244 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.34s
2025-09-29 06:25:21.520251 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.10s
2025-09-29 06:25:21.520259 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.82s
2025-09-29 06:25:21.520267 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.91s
2025-09-29 06:25:21.520275 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 4.84s
2025-09-29 06:25:21.520283 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.35s
2025-09-29 06:25:21.520290 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.32s
2025-09-29 06:25:21.520298 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.25s
2025-09-29 06:25:21.520306 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.12s
2025-09-29 06:25:21.520314 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.08s
2025-09-29 06:25:21.520321 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.96s
2025-09-29 06:25:21.520329 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.68s
2025-09-29 06:25:21.520337 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.64s
2025-09-29 06:25:21.520345 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.62s
2025-09-29 06:25:21.520352 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.61s
2025-09-29 06:25:21.520360 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.54s
2025-09-29 06:25:21.520368 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.50s
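When post-processing job consoles like this one, the PLAY RECAP host lines above can be parsed mechanically to flag failed or unreachable hosts. A minimal sketch (a hypothetical helper, assuming Ansible's standard recap line format):

```python
import re

# Matches an Ansible PLAY RECAP host line, e.g.
# "testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str):
    """Return (host, counters dict) for a recap line, or None if it doesn't match."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("counters"))}
    return m.group("host"), counters
```

For the recap above, every host reports `failed=0` and `unreachable=0`, which is the condition such a check would assert on.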
2025-09-29 06:25:21.520376 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.43s 2025-09-29 06:25:21.520384 | orchestrator | 2025-09-29 06:25:21 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:21.520392 | orchestrator | 2025-09-29 06:25:21 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:24.533173 | orchestrator | 2025-09-29 06:25:24 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:24.536106 | orchestrator | 2025-09-29 06:25:24 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:24.537636 | orchestrator | 2025-09-29 06:25:24 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:24.540774 | orchestrator | 2025-09-29 06:25:24 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:24.541098 | orchestrator | 2025-09-29 06:25:24 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:27.562080 | orchestrator | 2025-09-29 06:25:27 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:27.563787 | orchestrator | 2025-09-29 06:25:27 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:27.564418 | orchestrator | 2025-09-29 06:25:27 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:27.564955 | orchestrator | 2025-09-29 06:25:27 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:27.564982 | orchestrator | 2025-09-29 06:25:27 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:30.588320 | orchestrator | 2025-09-29 06:25:30 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:30.588579 | orchestrator | 2025-09-29 06:25:30 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:30.589965 | orchestrator | 
2025-09-29 06:25:30 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:30.590608 | orchestrator | 2025-09-29 06:25:30 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:30.590657 | orchestrator | 2025-09-29 06:25:30 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:33.614527 | orchestrator | 2025-09-29 06:25:33 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:33.614819 | orchestrator | 2025-09-29 06:25:33 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:33.615758 | orchestrator | 2025-09-29 06:25:33 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:33.616657 | orchestrator | 2025-09-29 06:25:33 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:33.616711 | orchestrator | 2025-09-29 06:25:33 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:36.648142 | orchestrator | 2025-09-29 06:25:36 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:36.649666 | orchestrator | 2025-09-29 06:25:36 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:36.650339 | orchestrator | 2025-09-29 06:25:36 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:36.650861 | orchestrator | 2025-09-29 06:25:36 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:36.651040 | orchestrator | 2025-09-29 06:25:36 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:39.681307 | orchestrator | 2025-09-29 06:25:39 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:39.681744 | orchestrator | 2025-09-29 06:25:39 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:39.682686 | orchestrator | 2025-09-29 06:25:39 | INFO  | 
Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:39.683355 | orchestrator | 2025-09-29 06:25:39 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:39.683396 | orchestrator | 2025-09-29 06:25:39 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:42.713067 | orchestrator | 2025-09-29 06:25:42 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:42.713449 | orchestrator | 2025-09-29 06:25:42 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:42.714303 | orchestrator | 2025-09-29 06:25:42 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:42.714990 | orchestrator | 2025-09-29 06:25:42 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:42.715025 | orchestrator | 2025-09-29 06:25:42 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:45.739420 | orchestrator | 2025-09-29 06:25:45 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:45.739709 | orchestrator | 2025-09-29 06:25:45 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:45.740796 | orchestrator | 2025-09-29 06:25:45 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:45.741239 | orchestrator | 2025-09-29 06:25:45 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:45.741442 | orchestrator | 2025-09-29 06:25:45 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:48.765201 | orchestrator | 2025-09-29 06:25:48 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state STARTED 2025-09-29 06:25:48.765748 | orchestrator | 2025-09-29 06:25:48 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:25:48.766616 | orchestrator | 2025-09-29 06:25:48 | INFO  | Task 
97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:25:48.767176 | orchestrator | 2025-09-29 06:25:48 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:25:48.767219 | orchestrator | 2025-09-29 06:25:48 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:25:51.788220 | orchestrator | 2025-09-29 06:25:51 | INFO  | Task e808bc58-47d6-429c-b3f7-70d6c273e1b9 is in state SUCCESS 2025-09-29 06:25:51.789019 | orchestrator | 2025-09-29 06:25:51.789053 | orchestrator | 2025-09-29 06:25:51.789059 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:25:51.789065 | orchestrator | 2025-09-29 06:25:51.789069 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:25:51.789074 | orchestrator | Monday 29 September 2025 06:23:52 +0000 (0:00:00.263) 0:00:00.263 ****** 2025-09-29 06:25:51.789078 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:25:51.789083 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:25:51.789087 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:25:51.789091 | orchestrator | 2025-09-29 06:25:51.789095 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:25:51.789112 | orchestrator | Monday 29 September 2025 06:23:53 +0000 (0:00:00.393) 0:00:00.656 ****** 2025-09-29 06:25:51.789116 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-29 06:25:51.789120 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-29 06:25:51.789124 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-29 06:25:51.789128 | orchestrator | 2025-09-29 06:25:51.789131 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-29 06:25:51.789135 | orchestrator | 2025-09-29 06:25:51.789139 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2025-09-29 06:25:51.789143 | orchestrator | Monday 29 September 2025 06:23:53 +0000 (0:00:00.741) 0:00:01.398 ****** 2025-09-29 06:25:51.789146 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:25:51.789151 | orchestrator | 2025-09-29 06:25:51.789154 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-29 06:25:51.789158 | orchestrator | Monday 29 September 2025 06:23:55 +0000 (0:00:01.158) 0:00:02.556 ****** 2025-09-29 06:25:51.789177 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-29 06:25:51.789182 | orchestrator | 2025-09-29 06:25:51.789185 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-29 06:25:51.789189 | orchestrator | Monday 29 September 2025 06:23:58 +0000 (0:00:03.677) 0:00:06.234 ****** 2025-09-29 06:25:51.789193 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-29 06:25:51.789197 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-29 06:25:51.789201 | orchestrator | 2025-09-29 06:25:51.789204 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-29 06:25:51.789208 | orchestrator | Monday 29 September 2025 06:24:05 +0000 (0:00:06.828) 0:00:13.063 ****** 2025-09-29 06:25:51.789212 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-29 06:25:51.789216 | orchestrator | 2025-09-29 06:25:51.789220 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-29 06:25:51.789224 | orchestrator | Monday 29 September 2025 06:24:09 +0000 (0:00:03.527) 0:00:16.591 ****** 2025-09-29 06:25:51.789228 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2025-09-29 06:25:51.789232 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-29 06:25:51.789236 | orchestrator | 2025-09-29 06:25:51.789239 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-29 06:25:51.789243 | orchestrator | Monday 29 September 2025 06:24:13 +0000 (0:00:04.068) 0:00:20.660 ****** 2025-09-29 06:25:51.789247 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-29 06:25:51.789251 | orchestrator | 2025-09-29 06:25:51.789255 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-29 06:25:51.789258 | orchestrator | Monday 29 September 2025 06:24:16 +0000 (0:00:03.425) 0:00:24.085 ****** 2025-09-29 06:25:51.789262 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-29 06:25:51.789266 | orchestrator | 2025-09-29 06:25:51.789270 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-29 06:25:51.789273 | orchestrator | Monday 29 September 2025 06:24:20 +0000 (0:00:04.311) 0:00:28.396 ****** 2025-09-29 06:25:51.789277 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:25:51.789281 | orchestrator | 2025-09-29 06:25:51.789285 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-29 06:25:51.789288 | orchestrator | Monday 29 September 2025 06:24:24 +0000 (0:00:03.370) 0:00:31.767 ****** 2025-09-29 06:25:51.789292 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:25:51.789296 | orchestrator | 2025-09-29 06:25:51.789312 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-29 06:25:51.789316 | orchestrator | Monday 29 September 2025 06:24:28 +0000 (0:00:04.325) 0:00:36.093 ****** 2025-09-29 06:25:51.789320 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:25:51.789323 | orchestrator | 
2025-09-29 06:25:51.789333 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-29 06:25:51.789337 | orchestrator | Monday 29 September 2025 06:24:33 +0000 (0:00:04.456) 0:00:40.549 ****** 2025-09-29 06:25:51.789352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789395 | orchestrator | 2025-09-29 06:25:51.789399 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-29 06:25:51.789403 | orchestrator | Monday 29 September 2025 06:24:34 +0000 (0:00:01.427) 0:00:41.977 ****** 2025-09-29 06:25:51.789407 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:51.789411 | orchestrator | 2025-09-29 06:25:51.789415 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-29 06:25:51.789421 | orchestrator | Monday 29 September 2025 06:24:34 +0000 (0:00:00.140) 0:00:42.117 ****** 2025-09-29 
06:25:51.789425 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:51.789428 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:51.789539 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:51.789546 | orchestrator | 2025-09-29 06:25:51.789552 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-29 06:25:51.789559 | orchestrator | Monday 29 September 2025 06:24:35 +0000 (0:00:00.385) 0:00:42.503 ****** 2025-09-29 06:25:51.789564 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-29 06:25:51.789570 | orchestrator | 2025-09-29 06:25:51.789576 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-29 06:25:51.789582 | orchestrator | Monday 29 September 2025 06:24:35 +0000 (0:00:00.797) 0:00:43.301 ****** 2025-09-29 06:25:51.789589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 
06:25:51.789843 | orchestrator | 2025-09-29 06:25:51.789847 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-29 06:25:51.789851 | orchestrator | Monday 29 September 2025 06:24:38 +0000 (0:00:02.443) 0:00:45.744 ****** 2025-09-29 06:25:51.789855 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:25:51.789859 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:25:51.789863 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:25:51.789867 | orchestrator | 2025-09-29 06:25:51.789870 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-29 06:25:51.789874 | orchestrator | Monday 29 September 2025 06:24:38 +0000 (0:00:00.259) 0:00:46.004 ****** 2025-09-29 06:25:51.789878 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:25:51.789882 | orchestrator | 2025-09-29 06:25:51.789886 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-29 06:25:51.789890 | orchestrator | Monday 29 September 2025 06:24:39 +0000 (0:00:00.638) 0:00:46.642 ****** 2025-09-29 06:25:51.789894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.789915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2025-09-29 06:25:51.789919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.789934 | orchestrator | 2025-09-29 06:25:51.789938 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-29 06:25:51.789942 | orchestrator | Monday 29 September 2025 06:24:41 +0000 (0:00:02.384) 0:00:49.027 ****** 2025-09-29 06:25:51.789950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.789956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.789960 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:51.789964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.789968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.789979 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:51.789983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.790078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.790095 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:51.790100 | orchestrator | 2025-09-29 
06:25:51.790106 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-29 06:25:51.790113 | orchestrator | Monday 29 September 2025 06:24:42 +0000 (0:00:00.616) 0:00:49.643 ****** 2025-09-29 06:25:51.790122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.790129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.790134 | orchestrator | skipping: 
[testbed-node-0] 2025-09-29 06:25:51.790140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.790156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.790161 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:51.790176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.790182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.790188 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:51.790194 | orchestrator | 2025-09-29 06:25:51.790199 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-29 06:25:51.790205 | orchestrator | Monday 29 September 2025 06:24:43 +0000 (0:00:00.992) 0:00:50.636 ****** 2025-09-29 06:25:51.790213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790262 | orchestrator | 2025-09-29 06:25:51.790268 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-09-29 06:25:51.790274 | orchestrator | Monday 29 September 2025 06:24:45 +0000 (0:00:02.437) 0:00:53.073 ****** 2025-09-29 06:25:51.790280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790332 | orchestrator | 2025-09-29 06:25:51.790337 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-29 06:25:51.790343 | orchestrator | Monday 29 September 2025 06:24:50 +0000 (0:00:04.762) 0:00:57.836 ****** 2025-09-29 06:25:51.790354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.790361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.790365 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:51.790369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.790376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.790380 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:51.790384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-29 06:25:51.790391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:25:51.790395 | orchestrator | skipping: 
[testbed-node-2] 2025-09-29 06:25:51.790398 | orchestrator | 2025-09-29 06:25:51.790402 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-29 06:25:51.790406 | orchestrator | Monday 29 September 2025 06:24:50 +0000 (0:00:00.581) 0:00:58.417 ****** 2025-09-29 06:25:51.790412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-29 06:25:51.790429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:25:51.790450 | orchestrator | 2025-09-29 06:25:51.790454 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-29 06:25:51.790458 | orchestrator | Monday 29 September 2025 06:24:53 +0000 (0:00:02.391) 0:01:00.808 ****** 2025-09-29 06:25:51.790462 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:25:51.790465 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:25:51.790469 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:25:51.790473 | orchestrator | 2025-09-29 06:25:51.790476 | orchestrator 
| TASK [magnum : Creating Magnum database] ***************************************
2025-09-29 06:25:51.790512 | orchestrator | Monday 29 September 2025 06:24:53 +0000 (0:00:00.297) 0:01:01.105 ******
2025-09-29 06:25:51.790516 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:51.790519 | orchestrator |
2025-09-29 06:25:51.790523 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-29 06:25:51.790527 | orchestrator | Monday 29 September 2025 06:24:56 +0000 (0:00:02.570) 0:01:03.676 ******
2025-09-29 06:25:51.790530 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:51.790534 | orchestrator |
2025-09-29 06:25:51.790538 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-29 06:25:51.790542 | orchestrator | Monday 29 September 2025 06:24:58 +0000 (0:00:02.448) 0:01:06.124 ******
2025-09-29 06:25:51.790545 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:51.790549 | orchestrator |
2025-09-29 06:25:51.790552 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-29 06:25:51.790556 | orchestrator | Monday 29 September 2025 06:25:17 +0000 (0:00:18.687) 0:01:24.811 ******
2025-09-29 06:25:51.790560 | orchestrator |
2025-09-29 06:25:51.790563 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-29 06:25:51.790567 | orchestrator | Monday 29 September 2025 06:25:17 +0000 (0:00:00.069) 0:01:24.881 ******
2025-09-29 06:25:51.790571 | orchestrator |
2025-09-29 06:25:51.790575 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-29 06:25:51.790578 | orchestrator | Monday 29 September 2025 06:25:17 +0000 (0:00:00.072) 0:01:24.954 ******
2025-09-29 06:25:51.790582 | orchestrator |
2025-09-29 06:25:51.790586 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-29 06:25:51.790589 | orchestrator | Monday 29 September 2025 06:25:17 +0000 (0:00:00.069) 0:01:25.024 ******
2025-09-29 06:25:51.790593 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:51.790597 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:25:51.790600 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:25:51.790604 | orchestrator |
2025-09-29 06:25:51.790608 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-29 06:25:51.790612 | orchestrator | Monday 29 September 2025 06:25:34 +0000 (0:00:16.697) 0:01:41.721 ******
2025-09-29 06:25:51.790615 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:25:51.790619 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:25:51.790623 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:25:51.790626 | orchestrator |
2025-09-29 06:25:51.790630 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:25:51.790634 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-29 06:25:51.790639 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-29 06:25:51.790642 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-29 06:25:51.790649 | orchestrator |
2025-09-29 06:25:51.790716 | orchestrator |
2025-09-29 06:25:51.790724 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:25:51.790730 | orchestrator | Monday 29 September 2025 06:25:49 +0000 (0:00:15.296) 0:01:57.017 ******
2025-09-29 06:25:51.790737 | orchestrator | ===============================================================================
2025-09-29 06:25:51.790743 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.69s
2025-09-29 06:25:51.790753 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 16.70s
2025-09-29 06:25:51.790759 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.30s
2025-09-29 06:25:51.790765 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.83s
2025-09-29 06:25:51.790768 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.76s
2025-09-29 06:25:51.790772 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.46s
2025-09-29 06:25:51.790778 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.33s
2025-09-29 06:25:51.790789 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.31s
2025-09-29 06:25:51.790799 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.07s
2025-09-29 06:25:51.790804 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.68s
2025-09-29 06:25:51.790810 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.53s
2025-09-29 06:25:51.790816 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.43s
2025-09-29 06:25:51.790821 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.37s
2025-09-29 06:25:51.790826 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.57s
2025-09-29 06:25:51.790833 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.45s
2025-09-29 06:25:51.790838 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.44s
2025-09-29 06:25:51.790844 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.44s 2025-09-
06:25:51.790850 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.39s
2025-09-29 06:25:51.790855 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.38s
2025-09-29 06:25:51.790861 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.43s
2025-09-29 06:25:51.790868 | orchestrator | 2025-09-29 06:25:51 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:25:51.790874 | orchestrator | 2025-09-29 06:25:51 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:25:51.790880 | orchestrator | 2025-09-29 06:25:51 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:25:51.791454 | orchestrator | 2025-09-29 06:25:51 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED
2025-09-29 06:25:51.791467 | orchestrator | 2025-09-29 06:25:51 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:26:49.553643 | orchestrator | 2025-09-29 06:26:49 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:26:49.554787 | orchestrator | 2025-09-29 06:26:49 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:26:49.556947 | orchestrator | 2025-09-29 06:26:49 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:26:49.559772 | orchestrator | 2025-09-29 06:26:49 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED
2025-09-29 06:26:49.559798 | orchestrator | 2025-09-29 06:26:49 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:26:52.588103 | orchestrator | 2025-09-29 06:26:52 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:26:52.588638 | orchestrator | 2025-09-29 06:26:52 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:26:52.589167 | orchestrator | 2025-09-29 06:26:52 | INFO  | Task 
97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:26:52.589999 | orchestrator | 2025-09-29 06:26:52 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:26:52.590086 | orchestrator | 2025-09-29 06:26:52 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:26:55.625735 | orchestrator | 2025-09-29 06:26:55 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:26:55.626146 | orchestrator | 2025-09-29 06:26:55 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:26:55.626931 | orchestrator | 2025-09-29 06:26:55 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:26:55.627963 | orchestrator | 2025-09-29 06:26:55 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:26:55.627991 | orchestrator | 2025-09-29 06:26:55 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:26:58.653215 | orchestrator | 2025-09-29 06:26:58 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:26:58.655226 | orchestrator | 2025-09-29 06:26:58 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:26:58.657035 | orchestrator | 2025-09-29 06:26:58 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:26:58.658694 | orchestrator | 2025-09-29 06:26:58 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:26:58.658743 | orchestrator | 2025-09-29 06:26:58 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:01.696597 | orchestrator | 2025-09-29 06:27:01 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:01.699320 | orchestrator | 2025-09-29 06:27:01 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:01.700853 | orchestrator | 2025-09-29 06:27:01 | INFO  | Task 
97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:01.702971 | orchestrator | 2025-09-29 06:27:01 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:01.703009 | orchestrator | 2025-09-29 06:27:01 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:04.736300 | orchestrator | 2025-09-29 06:27:04 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:04.736673 | orchestrator | 2025-09-29 06:27:04 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:04.737656 | orchestrator | 2025-09-29 06:27:04 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:04.738453 | orchestrator | 2025-09-29 06:27:04 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:04.738621 | orchestrator | 2025-09-29 06:27:04 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:07.793923 | orchestrator | 2025-09-29 06:27:07 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:07.794048 | orchestrator | 2025-09-29 06:27:07 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:07.795169 | orchestrator | 2025-09-29 06:27:07 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:07.795900 | orchestrator | 2025-09-29 06:27:07 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:07.795948 | orchestrator | 2025-09-29 06:27:07 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:10.837915 | orchestrator | 2025-09-29 06:27:10 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:10.838605 | orchestrator | 2025-09-29 06:27:10 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:10.839742 | orchestrator | 2025-09-29 06:27:10 | INFO  | Task 
97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:10.841554 | orchestrator | 2025-09-29 06:27:10 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:10.841602 | orchestrator | 2025-09-29 06:27:10 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:13.882575 | orchestrator | 2025-09-29 06:27:13 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:13.884626 | orchestrator | 2025-09-29 06:27:13 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:13.886360 | orchestrator | 2025-09-29 06:27:13 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:13.888018 | orchestrator | 2025-09-29 06:27:13 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:13.888057 | orchestrator | 2025-09-29 06:27:13 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:16.929006 | orchestrator | 2025-09-29 06:27:16 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:16.930234 | orchestrator | 2025-09-29 06:27:16 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:16.931712 | orchestrator | 2025-09-29 06:27:16 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:16.933327 | orchestrator | 2025-09-29 06:27:16 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:16.933364 | orchestrator | 2025-09-29 06:27:16 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:19.973281 | orchestrator | 2025-09-29 06:27:19 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:19.975126 | orchestrator | 2025-09-29 06:27:19 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:19.976695 | orchestrator | 2025-09-29 06:27:19 | INFO  | Task 
97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:19.978441 | orchestrator | 2025-09-29 06:27:19 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:19.978560 | orchestrator | 2025-09-29 06:27:19 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:23.014625 | orchestrator | 2025-09-29 06:27:23 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:23.016331 | orchestrator | 2025-09-29 06:27:23 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:23.017642 | orchestrator | 2025-09-29 06:27:23 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:23.020045 | orchestrator | 2025-09-29 06:27:23 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state STARTED 2025-09-29 06:27:23.020144 | orchestrator | 2025-09-29 06:27:23 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:26.072005 | orchestrator | 2025-09-29 06:27:26 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED 2025-09-29 06:27:26.073712 | orchestrator | 2025-09-29 06:27:26 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED 2025-09-29 06:27:26.075402 | orchestrator | 2025-09-29 06:27:26 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED 2025-09-29 06:27:26.076639 | orchestrator | 2025-09-29 06:27:26 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED 2025-09-29 06:27:26.078323 | orchestrator | 2025-09-29 06:27:26 | INFO  | Task 0526a8eb-e758-4a8d-80de-a3399af4a0a2 is in state SUCCESS 2025-09-29 06:27:26.078400 | orchestrator | 2025-09-29 06:27:26 | INFO  | Wait 1 second(s) until the next check 2025-09-29 06:27:26.079686 | orchestrator | 2025-09-29 06:27:26.079739 | orchestrator | 2025-09-29 06:27:26.079748 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:27:26.079755 | 
orchestrator |
2025-09-29 06:27:26.079762 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:27:26.079769 | orchestrator | Monday 29 September 2025 06:24:37 +0000 (0:00:00.241) 0:00:00.241 ******
2025-09-29 06:27:26.079775 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:27:26.079782 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:27:26.079788 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:27:26.079794 | orchestrator |
2025-09-29 06:27:26.079800 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:27:26.079807 | orchestrator | Monday 29 September 2025 06:24:37 +0000 (0:00:00.256) 0:00:00.498 ******
2025-09-29 06:27:26.079813 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-29 06:27:26.079819 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-29 06:27:26.079825 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-29 06:27:26.079831 | orchestrator |
2025-09-29 06:27:26.079837 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-29 06:27:26.079842 | orchestrator |
2025-09-29 06:27:26.079848 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-29 06:27:26.079868 | orchestrator | Monday 29 September 2025 06:24:37 +0000 (0:00:00.392) 0:00:00.891 ******
2025-09-29 06:27:26.079874 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:27:26.079880 | orchestrator |
2025-09-29 06:27:26.079886 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-29 06:27:26.079891 | orchestrator | Monday 29 September 2025 06:24:38 +0000 (0:00:00.409) 0:00:01.300 ******
2025-09-29 06:27:26.079897 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
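The "Group hosts based on enabled services" task above relies on Ansible's `group_by` pattern: each host lands in a group named after the value of a flag such as `enable_glance`, producing groups like `enable_glance_True`. A minimal sketch of that grouping logic, with an illustrative stand-in for the hostvars:

```python
def group_hosts(hostvars, flag):
    """Return {group_name: [hosts]}, keyed like Ansible's group_by."""
    groups = {}
    for host, host_vars in sorted(hostvars.items()):
        # group key mirrors "enable_glance_{{ enable_glance }}"
        key = f"{flag}_{host_vars.get(flag, False)}"
        groups.setdefault(key, []).append(host)
    return groups

# Hostvars copied from the testbed layout in the log; values are assumptions.
hostvars = {
    "testbed-node-0": {"enable_glance": True},
    "testbed-node-1": {"enable_glance": True},
    "testbed-node-2": {"enable_glance": True},
}
groups = group_hosts(hostvars, "enable_glance")
```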
2025-09-29 06:27:26.079903 | orchestrator |
2025-09-29 06:27:26.079909 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-29 06:27:26.079914 | orchestrator | Monday 29 September 2025 06:24:42 +0000 (0:00:03.893) 0:00:05.194 ******
2025-09-29 06:27:26.079920 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-29 06:27:26.079926 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-29 06:27:26.079932 | orchestrator |
2025-09-29 06:27:26.079938 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-29 06:27:26.079944 | orchestrator | Monday 29 September 2025 06:24:49 +0000 (0:00:07.317) 0:00:12.511 ******
2025-09-29 06:27:26.079950 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-29 06:27:26.079956 | orchestrator |
2025-09-29 06:27:26.079962 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-29 06:27:26.079985 | orchestrator | Monday 29 September 2025 06:24:52 +0000 (0:00:03.592) 0:00:16.104 ******
2025-09-29 06:27:26.079991 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-29 06:27:26.079997 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-29 06:27:26.080003 | orchestrator |
2025-09-29 06:27:26.080009 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-29 06:27:26.080014 | orchestrator | Monday 29 September 2025 06:24:57 +0000 (0:00:04.250) 0:00:20.354 ******
2025-09-29 06:27:26.080020 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-29 06:27:26.080025 | orchestrator |
2025-09-29 06:27:26.080031 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-29 06:27:26.080037 | orchestrator |
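The service-ks-register tasks above (creating services, endpoints, projects, users, roles) follow create-if-missing semantics: an object is created only when absent, so a first run reports "changed" and a rerun reports "ok". A rough sketch of that idempotency, with a plain dict standing in for the Keystone catalog (not a real client):

```python
def ensure_service(catalog, name, type_):
    """Create the service if missing; report changed/ok like Ansible."""
    if (name, type_) in catalog["services"]:
        return "ok"
    catalog["services"].add((name, type_))
    return "changed"

def ensure_endpoint(catalog, service, interface, url):
    """Create one endpoint per (interface, URL) pair, idempotently."""
    key = (service, interface, url)
    if key in catalog["endpoints"]:
        return "ok"
    catalog["endpoints"].add(key)
    return "changed"

catalog = {"services": set(), "endpoints": set()}
first_run = [ensure_service(catalog, "glance", "image")]
for interface, url in [
    ("internal", "https://api-int.testbed.osism.xyz:9292"),
    ("public", "https://api.testbed.osism.xyz:9292"),
]:
    first_run.append(ensure_endpoint(catalog, "glance", interface, url))
# A second run finds everything in place and reports "ok".
second_run = ensure_service(catalog, "glance", "image")
```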
Monday 29 September 2025 06:25:00 +0000 (0:00:03.603) 0:00:23.958 ****** 2025-09-29 06:27:26.080042 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-29 06:27:26.080048 | orchestrator | 2025-09-29 06:27:26.080054 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-29 06:27:26.080059 | orchestrator | Monday 29 September 2025 06:25:05 +0000 (0:00:04.356) 0:00:28.314 ****** 2025-09-29 06:27:26.080082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080158 | orchestrator | 
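The `custom_member_list` entries repeated in the haproxy settings above follow one pattern per backend node. A small sketch of how such a member list could be rendered from the node inventory; the node names and addresses are taken from the log, while the helper itself is illustrative:

```python
def haproxy_members(nodes, port, check="check inter 2000 rise 2 fall 5"):
    """Render one HAProxy 'server' line per backend node."""
    return [f"server {name} {addr}:{port} {check}" for name, addr in nodes]

nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
members = haproxy_members(nodes, 9292)
```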
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080170 | orchestrator | 2025-09-29 06:27:26.080180 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-29 06:27:26.080192 | orchestrator | Monday 29 September 2025 06:25:08 +0000 (0:00:03.068) 0:00:31.383 ****** 2025-09-29 
06:27:26.080198 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:27:26.080204 | orchestrator | 2025-09-29 06:27:26.080216 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-29 06:27:26.080222 | orchestrator | Monday 29 September 2025 06:25:08 +0000 (0:00:00.671) 0:00:32.054 ****** 2025-09-29 06:27:26.080228 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.080234 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:27:26.080239 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:27:26.080245 | orchestrator | 2025-09-29 06:27:26.080251 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-29 06:27:26.080257 | orchestrator | Monday 29 September 2025 06:25:12 +0000 (0:00:03.323) 0:00:35.378 ****** 2025-09-29 06:27:26.080262 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:26.080269 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:26.080274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:26.080280 | orchestrator | 2025-09-29 06:27:26.080287 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-29 06:27:26.080303 | orchestrator | Monday 29 September 2025 06:25:14 +0000 (0:00:01.959) 0:00:37.338 ****** 2025-09-29 06:27:26.080310 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:26.080317 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:26.080324 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:26.080331 | orchestrator | 2025-09-29 06:27:26.080338 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-29 06:27:26.080345 | orchestrator | Monday 29 September 2025 06:25:15 +0000 (0:00:01.419) 0:00:38.758 ****** 2025-09-29 06:27:26.080351 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:27:26.080358 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:27:26.080365 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:27:26.080372 | orchestrator | 2025-09-29 06:27:26.080378 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-29 06:27:26.080385 | orchestrator | Monday 29 September 2025 06:25:16 +0000 (0:00:00.682) 0:00:39.441 ****** 2025-09-29 06:27:26.080392 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.080398 | orchestrator | 2025-09-29 06:27:26.080405 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-29 06:27:26.080412 | orchestrator | Monday 29 September 2025 06:25:16 +0000 (0:00:00.312) 0:00:39.753 ****** 2025-09-29 06:27:26.080419 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.080426 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.080433 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.080440 | orchestrator | 2025-09-29 06:27:26.080447 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-29 06:27:26.080453 | orchestrator | Monday 29 September 2025 06:25:16 +0000 (0:00:00.283) 0:00:40.036 ****** 2025-09-29 06:27:26.080460 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:27:26.080467 | orchestrator | 2025-09-29 06:27:26.080474 | orchestrator | TASK [service-cert-copy : glance | Copying 
over extra CA certificates] ********* 2025-09-29 06:27:26.080527 | orchestrator | Monday 29 September 2025 06:25:17 +0000 (0:00:00.541) 0:00:40.578 ****** 2025-09-29 06:27:26.080542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080576 | orchestrator | 2025-09-29 06:27:26.080584 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-29 06:27:26.080590 | orchestrator | Monday 29 September 2025 06:25:24 +0000 (0:00:06.713) 0:00:47.291 ****** 2025-09-29 06:27:26.080605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:27:26.080617 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.080624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:27:26.080633 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.080650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:27:26.080667 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.080677 | orchestrator | 2025-09-29 06:27:26.080688 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-29 06:27:26.080698 | orchestrator | Monday 29 September 2025 06:25:27 +0000 (0:00:03.376) 0:00:50.668 ****** 2025-09-29 06:27:26.080713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:27:26.080723 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.080734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:27:26.080746 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.080809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-29 06:27:26.080822 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.080833 | orchestrator | 2025-09-29 06:27:26.080842 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-29 06:27:26.080851 | orchestrator | Monday 29 September 2025 06:25:30 +0000 (0:00:02.593) 0:00:53.262 ****** 2025-09-29 06:27:26.080860 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.080866 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.080872 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.080878 | orchestrator | 2025-09-29 06:27:26.080884 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-29 06:27:26.080889 | orchestrator | Monday 29 September 2025 06:25:33 +0000 (0:00:03.063) 0:00:56.325 ****** 2025-09-29 06:27:26.080901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.080936 | orchestrator | 2025-09-29 06:27:26.080942 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-29 06:27:26.080949 | orchestrator | Monday 29 September 2025 06:25:39 +0000 (0:00:06.021) 0:01:02.347 ****** 2025-09-29 06:27:26.080959 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.080967 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:27:26.080978 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:27:26.080988 | orchestrator | 2025-09-29 06:27:26.080998 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-29 06:27:26.081007 | orchestrator | Monday 29 September 2025 06:25:45 +0000 (0:00:06.285) 0:01:08.633 ****** 2025-09-29 06:27:26.081017 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.081024 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.081030 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.081036 | 
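The container definitions dumped above each carry a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As a minimal sketch of what such a dict corresponds to in plain Docker terms, the helper below translates it into `docker run` health flags. The function name is hypothetical and the assumption that the numeric fields are seconds is mine, not stated in the log:

```python
# Sketch: convert the kolla-ansible healthcheck dict seen in the log into
# `docker run`-style flags. Treating interval/start_period/timeout as
# seconds is an assumption; the field values are taken from the log.
def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    test = hc["test"]
    # kolla uses the Docker exec form ['CMD-SHELL', '<command>']
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

flags = healthcheck_to_docker_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"],
    "timeout": "30",
})
print(flags[0])  # --health-cmd=healthcheck_curl http://192.168.16.10:9292
```

`healthcheck_curl` itself is a helper shipped inside the kolla images; only its invocation string appears in this log.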
orchestrator | 2025-09-29 06:27:26.081042 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-29 06:27:26.081051 | orchestrator | Monday 29 September 2025 06:25:50 +0000 (0:00:05.485) 0:01:14.118 ****** 2025-09-29 06:27:26.081058 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.081064 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.081070 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.081075 | orchestrator | 2025-09-29 06:27:26.081081 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-29 06:27:26.081090 | orchestrator | Monday 29 September 2025 06:25:54 +0000 (0:00:03.361) 0:01:17.479 ****** 2025-09-29 06:27:26.081099 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.081113 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.081123 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.081131 | orchestrator | 2025-09-29 06:27:26.081140 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-29 06:27:26.081149 | orchestrator | Monday 29 September 2025 06:25:58 +0000 (0:00:04.159) 0:01:21.639 ****** 2025-09-29 06:27:26.081159 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.081167 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.081176 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.081185 | orchestrator | 2025-09-29 06:27:26.081194 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-29 06:27:26.081203 | orchestrator | Monday 29 September 2025 06:26:01 +0000 (0:00:02.976) 0:01:24.616 ****** 2025-09-29 06:27:26.081218 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.081228 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.081237 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.081247 | 
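Each service definition above also carries a `haproxy` section whose `custom_member_list` repeats one `server …` line per controller node. A small sketch of how those backend lines can be generated (the node names, IPs, port, and check parameters are exactly those shown in the log; the function itself is illustrative, not kolla code):

```python
# Sketch: reproduce the HAProxy custom_member_list entries from the
# service definitions above. Values are taken from the log output;
# this helper is illustrative only.
def haproxy_members(nodes: dict[str, str], port: int = 9292) -> list[str]:
    return [
        f"server {name} {ip}:{port} check inter 2000 rise 2 fall 5"
        for name, ip in nodes.items()
    ]

members = haproxy_members({
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
})
print(members[0])
# server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
```

In HAProxy terms, `check inter 2000 rise 2 fall 5` means: health-check every 2000 ms, mark a server up after 2 consecutive successes and down after 5 consecutive failures.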
orchestrator | 2025-09-29 06:27:26.081256 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-29 06:27:26.081266 | orchestrator | Monday 29 September 2025 06:26:01 +0000 (0:00:00.267) 0:01:24.884 ****** 2025-09-29 06:27:26.081276 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-29 06:27:26.081285 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.081291 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-29 06:27:26.081296 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.081302 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-29 06:27:26.081308 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.081314 | orchestrator | 2025-09-29 06:27:26.081320 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-29 06:27:26.081326 | orchestrator | Monday 29 September 2025 06:26:04 +0000 (0:00:02.769) 0:01:27.653 ****** 2025-09-29 06:27:26.081333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.081358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.081365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-29 06:27:26.081377 | orchestrator | 2025-09-29 06:27:26.081383 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-29 06:27:26.081388 | orchestrator | Monday 29 September 2025 06:26:08 +0000 (0:00:03.570) 0:01:31.223 ****** 2025-09-29 06:27:26.081394 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:26.081400 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:26.081406 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:26.081411 | orchestrator | 2025-09-29 06:27:26.081420 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-29 06:27:26.081429 | orchestrator | Monday 29 September 2025 06:26:08 +0000 (0:00:00.253) 0:01:31.477 ****** 2025-09-29 06:27:26.081438 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.081449 | orchestrator | 2025-09-29 06:27:26.081458 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-29 06:27:26.081468 | orchestrator | Monday 29 September 2025 06:26:10 +0000 (0:00:02.289) 0:01:33.766 ****** 2025-09-29 06:27:26.081503 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.081514 | orchestrator | 2025-09-29 06:27:26.081525 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-29 06:27:26.081535 | orchestrator | Monday 29 September 2025 06:26:13 +0000 (0:00:02.441) 0:01:36.208 ****** 2025-09-29 06:27:26.081544 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.081555 
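The `Enable log_bin_trust_function_creators function` / `Disable …` task pair brackets the Glance bootstrap container: with binary logging active, MariaDB/MySQL otherwise refuses non-SUPER users creating stored routines during schema migration. A sketch of the equivalent SQL (the exact statements are an assumption, not shown in the log):

```sql
-- Assumed equivalent of the enable/disable tasks above:
SET GLOBAL log_bin_trust_function_creators = 1;
-- ... Glance bootstrap container runs its DB migrations here ...
SET GLOBAL log_bin_trust_function_creators = 0;
```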
| orchestrator | 2025-09-29 06:27:26.081561 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-29 06:27:26.081567 | orchestrator | Monday 29 September 2025 06:26:15 +0000 (0:00:02.278) 0:01:38.486 ****** 2025-09-29 06:27:26.081573 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.081579 | orchestrator | 2025-09-29 06:27:26.081586 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-29 06:27:26.081592 | orchestrator | Monday 29 September 2025 06:26:46 +0000 (0:00:30.746) 0:02:09.233 ****** 2025-09-29 06:27:26.081598 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.081603 | orchestrator | 2025-09-29 06:27:26.081615 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-29 06:27:26.081622 | orchestrator | Monday 29 September 2025 06:26:48 +0000 (0:00:02.269) 0:02:11.503 ****** 2025-09-29 06:27:26.081627 | orchestrator | 2025-09-29 06:27:26.081633 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-29 06:27:26.081639 | orchestrator | Monday 29 September 2025 06:26:48 +0000 (0:00:00.056) 0:02:11.559 ****** 2025-09-29 06:27:26.081645 | orchestrator | 2025-09-29 06:27:26.081651 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-29 06:27:26.081657 | orchestrator | Monday 29 September 2025 06:26:48 +0000 (0:00:00.059) 0:02:11.618 ****** 2025-09-29 06:27:26.081663 | orchestrator | 2025-09-29 06:27:26.081669 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-29 06:27:26.081674 | orchestrator | Monday 29 September 2025 06:26:48 +0000 (0:00:00.061) 0:02:11.680 ****** 2025-09-29 06:27:26.081687 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:27:26.081693 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:27:26.081700 | 
orchestrator | changed: [testbed-node-1] 2025-09-29 06:27:26.081706 | orchestrator | 2025-09-29 06:27:26.081712 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:27:26.081724 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-29 06:27:26.081731 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-29 06:27:26.081738 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-29 06:27:26.081743 | orchestrator | 2025-09-29 06:27:26.081749 | orchestrator | 2025-09-29 06:27:26.081755 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:27:26.081761 | orchestrator | Monday 29 September 2025 06:27:23 +0000 (0:00:35.412) 0:02:47.092 ****** 2025-09-29 06:27:26.081767 | orchestrator | =============================================================================== 2025-09-29 06:27:26.081773 | orchestrator | glance : Restart glance-api container ---------------------------------- 35.41s 2025-09-29 06:27:26.081779 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.75s 2025-09-29 06:27:26.081785 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.32s 2025-09-29 06:27:26.081791 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.71s 2025-09-29 06:27:26.081797 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.29s 2025-09-29 06:27:26.081803 | orchestrator | glance : Copying over config.json files for services -------------------- 6.02s 2025-09-29 06:27:26.081809 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.49s 2025-09-29 06:27:26.081815 | orchestrator | service-ks-register 
: glance | Granting user roles ---------------------- 4.36s
2025-09-29 06:27:26.081821 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.25s
2025-09-29 06:27:26.081827 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.16s
2025-09-29 06:27:26.081832 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.89s
2025-09-29 06:27:26.081838 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.60s
2025-09-29 06:27:26.081844 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.59s
2025-09-29 06:27:26.081850 | orchestrator | glance : Check glance containers ---------------------------------------- 3.57s
2025-09-29 06:27:26.081856 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.38s
2025-09-29 06:27:26.081862 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.36s
2025-09-29 06:27:26.081867 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.32s
2025-09-29 06:27:26.081873 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.07s
2025-09-29 06:27:26.081879 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.06s
2025-09-29 06:27:26.081885 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 2.98s
2025-09-29 06:27:29.120015 | orchestrator | 2025-09-29 06:27:29 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:29.121941 | orchestrator | 2025-09-29 06:27:29 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:29.123743 | orchestrator | 2025-09-29 06:27:29 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:29.125445 | orchestrator | 2025-09-29 06:27:29 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:29.125553 | orchestrator | 2025-09-29 06:27:29 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:32.172202 | orchestrator | 2025-09-29 06:27:32 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:32.172908 | orchestrator | 2025-09-29 06:27:32 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:32.174211 | orchestrator | 2025-09-29 06:27:32 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:32.175167 | orchestrator | 2025-09-29 06:27:32 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:32.175373 | orchestrator | 2025-09-29 06:27:32 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:35.216919 | orchestrator | 2025-09-29 06:27:35 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:35.218352 | orchestrator | 2025-09-29 06:27:35 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:35.219772 | orchestrator | 2025-09-29 06:27:35 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:35.221658 | orchestrator | 2025-09-29 06:27:35 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:35.221705 | orchestrator | 2025-09-29 06:27:35 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:38.259657 | orchestrator | 2025-09-29 06:27:38 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:38.260571 | orchestrator | 2025-09-29 06:27:38 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:38.261596 | orchestrator | 2025-09-29 06:27:38 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:38.262942 | orchestrator | 2025-09-29 06:27:38 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:38.262977 | orchestrator | 2025-09-29 06:27:38 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:41.316707 | orchestrator | 2025-09-29 06:27:41 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:41.318445 | orchestrator | 2025-09-29 06:27:41 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:41.320992 | orchestrator | 2025-09-29 06:27:41 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:41.322963 | orchestrator | 2025-09-29 06:27:41 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:41.323015 | orchestrator | 2025-09-29 06:27:41 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:44.370911 | orchestrator | 2025-09-29 06:27:44 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:44.374190 | orchestrator | 2025-09-29 06:27:44 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:44.376211 | orchestrator | 2025-09-29 06:27:44 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:44.377722 | orchestrator | 2025-09-29 06:27:44 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:44.377753 | orchestrator | 2025-09-29 06:27:44 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:47.412654 | orchestrator | 2025-09-29 06:27:47 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:47.414523 | orchestrator | 2025-09-29 06:27:47 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:47.417324 | orchestrator | 2025-09-29 06:27:47 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:47.419023 | orchestrator | 2025-09-29 06:27:47 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:47.419109 | orchestrator | 2025-09-29 06:27:47 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:50.460146 | orchestrator | 2025-09-29 06:27:50 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:50.461179 | orchestrator | 2025-09-29 06:27:50 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:50.462936 | orchestrator | 2025-09-29 06:27:50 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:50.464023 | orchestrator | 2025-09-29 06:27:50 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state STARTED
2025-09-29 06:27:50.464063 | orchestrator | 2025-09-29 06:27:50 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:53.510157 | orchestrator | 2025-09-29 06:27:53 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:27:53.511096 | orchestrator | 2025-09-29 06:27:53 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:53.512319 | orchestrator | 2025-09-29 06:27:53 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:53.513641 | orchestrator | 2025-09-29 06:27:53 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:53.516213 | orchestrator | 2025-09-29 06:27:53 | INFO  | Task 97bead5f-0f56-414f-b054-605d549f9f00 is in state SUCCESS
2025-09-29 06:27:53.518187 | orchestrator |
2025-09-29 06:27:53.518254 | orchestrator |
2025-09-29 06:27:53.518270 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:27:53.518283 | orchestrator |
2025-09-29 06:27:53.518295 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:27:53.518306 | orchestrator | Monday 29 September 2025 06:24:38 +0000 (0:00:00.235) 0:00:00.235 ******
2025-09-29 06:27:53.518318 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:27:53.518331 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:27:53.518343 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:27:53.518355 | orchestrator | ok: [testbed-node-3]
2025-09-29 06:27:53.518366 | orchestrator | ok: [testbed-node-4]
2025-09-29 06:27:53.518377 | orchestrator | ok: [testbed-node-5]
2025-09-29 06:27:53.518389 | orchestrator |
2025-09-29 06:27:53.518400 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:27:53.518412 | orchestrator | Monday 29 September 2025 06:24:39 +0000 (0:00:00.601) 0:00:00.836 ******
2025-09-29 06:27:53.518423 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-09-29 06:27:53.518435 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-09-29 06:27:53.518466 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-09-29 06:27:53.518515 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-09-29 06:27:53.518528 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-09-29 06:27:53.518539 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-09-29 06:27:53.518550 | orchestrator |
2025-09-29 06:27:53.518562 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-09-29 06:27:53.518571 | orchestrator |
2025-09-29 06:27:53.518582 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-29 06:27:53.518594 | orchestrator | Monday 29 September 2025 06:24:39 +0000 (0:00:00.498) 0:00:01.335 ******
2025-09-29 06:27:53.518632 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:27:53.518646 | orchestrator |
2025-09-29 06:27:53.518658 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-09-29 06:27:53.518697 | orchestrator | Monday 29 September 2025 06:24:40 +0000 (0:00:01.076) 0:00:02.412 ******
2025-09-29 06:27:53.518710 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-09-29 06:27:53.518721 | orchestrator |
2025-09-29 06:27:53.518733 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-09-29 06:27:53.518745 | orchestrator | Monday 29 September 2025 06:24:44 +0000 (0:00:03.785) 0:00:06.197 ******
2025-09-29 06:27:53.518756 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-09-29 06:27:53.518769 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-09-29 06:27:53.518780 | orchestrator |
2025-09-29 06:27:53.518792 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-09-29 06:27:53.518803 | orchestrator | Monday 29 September 2025 06:24:51 +0000 (0:00:07.139) 0:00:13.337 ******
2025-09-29 06:27:53.518815 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-29 06:27:53.518828 | orchestrator |
2025-09-29 06:27:53.518839 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-09-29 06:27:53.518850 | orchestrator | Monday 29 September 2025 06:24:55 +0000 (0:00:03.598) 0:00:16.936 ******
2025-09-29 06:27:53.518861 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-29 06:27:53.518872 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-09-29 06:27:53.518884 | orchestrator |
2025-09-29 06:27:53.518895 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-09-29 06:27:53.518906 | orchestrator | Monday 29 September 2025 06:24:59 +0000
(0:00:04.130) 0:00:21.066 ****** 2025-09-29 06:27:53.518917 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-29 06:27:53.518929 | orchestrator | 2025-09-29 06:27:53.518940 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-29 06:27:53.518950 | orchestrator | Monday 29 September 2025 06:25:03 +0000 (0:00:03.884) 0:00:24.950 ****** 2025-09-29 06:27:53.518960 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-29 06:27:53.518971 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-29 06:27:53.518982 | orchestrator | 2025-09-29 06:27:53.518993 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-29 06:27:53.519005 | orchestrator | Monday 29 September 2025 06:25:11 +0000 (0:00:08.459) 0:00:33.409 ****** 2025-09-29 06:27:53.519019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.519078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.519104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.519116 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519178 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.519241 | orchestrator | 2025-09-29 06:27:53.519258 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-29 06:27:53.519269 | orchestrator | Monday 29 September 2025 06:25:14 +0000 (0:00:02.463) 0:00:35.872 ****** 2025-09-29 06:27:53.519280 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:53.519298 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:53.519309 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:53.519319 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:27:53.519329 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:27:53.519340 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:27:53.519351 | orchestrator | 2025-09-29 06:27:53.519362 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-29 06:27:53.519372 | orchestrator | Monday 29 September 2025 06:25:14 +0000 (0:00:00.650) 0:00:36.523 ****** 2025-09-29 06:27:53.519382 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:53.519392 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:53.519402 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:53.519419 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:27:53.519431 | orchestrator | 2025-09-29 06:27:53.519441 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-29 
06:27:53.519452 | orchestrator | Monday 29 September 2025 06:25:15 +0000 (0:00:01.018) 0:00:37.541 ****** 2025-09-29 06:27:53.519463 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-29 06:27:53.519530 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-29 06:27:53.519545 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-29 06:27:53.519556 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-29 06:27:53.519566 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-29 06:27:53.519576 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-29 06:27:53.519586 | orchestrator | 2025-09-29 06:27:53.519596 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-29 06:27:53.519607 | orchestrator | Monday 29 September 2025 06:25:17 +0000 (0:00:01.729) 0:00:39.271 ****** 2025-09-29 06:27:53.519620 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-29 06:27:53.519633 | orchestrator | skipping: 
[testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-29 06:27:53.519644 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-29 06:27:53.519675 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-29 06:27:53.519692 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-29 06:27:53.519702 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-29 06:27:53.519712 | 
orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-29 06:27:53.519723 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-29 06:27:53.519751 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-29 06:27:53.519763 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-29 06:27:53.519776 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-29 06:27:53.519787 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-29 06:27:53.519798 | orchestrator | 2025-09-29 06:27:53.519808 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-29 06:27:53.519819 | orchestrator | Monday 29 September 2025 06:25:22 +0000 (0:00:05.090) 0:00:44.361 ****** 2025-09-29 06:27:53.519836 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:53.519847 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:53.519857 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-29 06:27:53.519867 | orchestrator | 2025-09-29 06:27:53.519877 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] 
*****************
2025-09-29 06:27:53.519887 | orchestrator | Monday 29 September 2025 06:25:24 +0000 (0:00:01.948) 0:00:46.310 ******
2025-09-29 06:27:53.519897 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-09-29 06:27:53.519906 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-09-29 06:27:53.519916 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-09-29 06:27:53.519926 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-09-29 06:27:53.519937 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-09-29 06:27:53.519949 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-09-29 06:27:53.519956 | orchestrator |
2025-09-29 06:27:53.519962 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-09-29 06:27:53.519968 | orchestrator | Monday 29 September 2025 06:25:28 +0000 (0:00:03.297) 0:00:49.608 ******
2025-09-29 06:27:53.519974 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-09-29 06:27:53.519980 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-09-29 06:27:53.519986 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-09-29 06:27:53.519992 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-09-29 06:27:53.519999 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-09-29 06:27:53.520005 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-09-29 06:27:53.520011 | orchestrator |
2025-09-29 06:27:53.520017 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-09-29 06:27:53.520023 | orchestrator | Monday 29 September 2025 06:25:29 +0000 (0:00:01.083) 0:00:50.691 ******
2025-09-29 06:27:53.520033 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:27:53.520040 | orchestrator |
2025-09-29 06:27:53.520046 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-09-29 06:27:53.520052 | orchestrator | Monday 29 September 2025 06:25:29 +0000 (0:00:00.108) 0:00:50.800 ******
2025-09-29 06:27:53.520058 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:27:53.520064 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:27:53.520070 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:27:53.520077 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:27:53.520087 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:27:53.520097 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:27:53.520107 | orchestrator |
2025-09-29 06:27:53.520117 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-29 06:27:53.520127 | orchestrator | Monday 29 September 2025 06:25:29 +0000 (0:00:00.630) 0:00:51.430 ******
2025-09-29 06:27:53.520138 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:27:53.520150 | orchestrator |
2025-09-29 06:27:53.520161 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-09-29 06:27:53.520172 | orchestrator | Monday 29 September 2025 06:25:30 +0000 (0:00:01.042) 0:00:52.473 ******
2025-09-29 06:27:53.520183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-29 06:27:53.520207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-29 06:27:53.520220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '',
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.520238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520293 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520305 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520311 | orchestrator | 2025-09-29 06:27:53.520317 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-29 06:27:53.520324 | orchestrator | Monday 29 September 2025 06:25:34 +0000 (0:00:03.125) 0:00:55.599 ****** 2025-09-29 06:27:53.520331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.520341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.520358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520372 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:53.520379 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:53.520385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520398 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:27:53.520405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.520415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520422 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:53.520431 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520449 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:27:53.520456 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520469 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:27:53.520495 | orchestrator | 2025-09-29 06:27:53.520506 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-29 06:27:53.520517 | orchestrator | Monday 29 September 2025 06:25:36 +0000 (0:00:02.736) 0:00:58.336 ****** 2025-09-29 06:27:53.520539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.520549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.520570 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:53.520577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520583 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:53.520589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.520602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520608 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:53.520618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520635 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:27:53.520642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520654 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:27:53.520665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.520685 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:27:53.520691 | orchestrator | 2025-09-29 06:27:53.520698 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-29 06:27:53.520704 | orchestrator | Monday 29 September 2025 06:25:38 +0000 (0:00:01.972) 0:01:00.309 ****** 2025-09-29 06:27:53.520710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.520717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.520724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.520735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.520856 | orchestrator | 2025-09-29 06:27:53.520866 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-29 06:27:53.520876 | orchestrator | Monday 29 September 2025 06:25:41 +0000 (0:00:03.143) 0:01:03.452 ****** 2025-09-29 06:27:53.520886 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-29 06:27:53.520896 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-29 06:27:53.520906 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:27:53.520917 | 
orchestrator | skipping: [testbed-node-3] 2025-09-29 06:27:53.520927 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-29 06:27:53.520937 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:27:53.520948 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-29 06:27:53.520958 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-29 06:27:53.520969 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-29 06:27:53.520978 | orchestrator | 2025-09-29 06:27:53.520984 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-29 06:27:53.520990 | orchestrator | Monday 29 September 2025 06:25:44 +0000 (0:00:02.135) 0:01:05.588 ****** 2025-09-29 06:27:53.520996 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.521028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.521034 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-29 06:27:53.521167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521203 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521214 | orchestrator | 2025-09-29 06:27:53.521221 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-29 06:27:53.521227 | orchestrator | Monday 29 September 2025 06:25:52 +0000 (0:00:08.281) 0:01:13.870 ****** 2025-09-29 06:27:53.521238 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:53.521244 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:53.521250 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:53.521257 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:27:53.521263 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:27:53.521269 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:27:53.521275 | orchestrator | 2025-09-29 06:27:53.521281 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-29 06:27:53.521288 | orchestrator | Monday 29 September 2025 06:25:54 +0000 (0:00:02.234) 0:01:16.104 ****** 2025-09-29 06:27:53.521297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.521304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521311 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:53.521318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 
06:27:53.521329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-29 06:27:53.521350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521356 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:53.521362 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:53.521369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521382 | orchestrator | skipping: [testbed-node-3] 2025-09-29 
06:27:53.521388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521406 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:27:53.521419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521426 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-29 06:27:53.521433 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:27:53.521439 | orchestrator | 2025-09-29 06:27:53.521446 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-29 06:27:53.521452 | orchestrator | Monday 29 September 2025 06:25:56 +0000 (0:00:01.952) 0:01:18.056 ****** 2025-09-29 06:27:53.521458 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:27:53.521465 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:27:53.521471 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:27:53.521503 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:27:53.521510 | orchestrator | skipping: 
[testbed-node-4] 2025-09-29 06:27:53.521516 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:27:53.521522 | orchestrator | 2025-09-29 06:27:53.521529 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-29 06:27:53.521535 | orchestrator | Monday 29 September 2025 06:25:57 +0000 (0:00:00.629) 0:01:18.686 ****** 2025-09-29 06:27:53.521547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-29 06:27:53.521554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-29 06:27:53.521579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-29 06:27:53.521587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-09-29 06:27:53.521597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:27:53.521659 | orchestrator |
2025-09-29 06:27:53.521666 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-09-29 06:27:53.521672 | orchestrator | Monday 29 September 2025 06:25:59 +0000 (0:00:02.640) 0:01:21.326 ******
2025-09-29 06:27:53.521678 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:27:53.521684 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:27:53.521691 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:27:53.521698 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:27:53.521704 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:27:53.521710 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:27:53.521716 | orchestrator |
2025-09-29 06:27:53.521723 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-09-29 06:27:53.521729 | orchestrator | Monday 29 September 2025 06:26:00 +0000 (0:00:00.558) 0:01:21.884 ******
2025-09-29 06:27:53.521740 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:27:53.521750 | orchestrator |
2025-09-29 06:27:53.521766 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-09-29 06:27:53.521779 | orchestrator | Monday 29 September 2025 06:26:03 +0000 (0:00:02.692) 0:01:24.576 ******
2025-09-29 06:27:53.521788 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:27:53.521798 | orchestrator |
2025-09-29 06:27:53.521808 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-09-29 06:27:53.521818 | orchestrator | Monday 29 September 2025 06:26:05 +0000 (0:00:02.662) 0:01:27.239 ******
2025-09-29 06:27:53.521829 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:27:53.521839 | orchestrator |
2025-09-29 06:27:53.521850 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-29 06:27:53.521860 | orchestrator | Monday 29 September 2025 06:26:27 +0000 (0:00:21.543) 0:01:48.782 ******
2025-09-29 06:27:53.521870 | orchestrator |
2025-09-29 06:27:53.521887 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-29 06:27:53.521899 | orchestrator | Monday 29 September 2025 06:26:27 +0000 (0:00:00.064) 0:01:48.847 ******
2025-09-29 06:27:53.521909 | orchestrator |
2025-09-29 06:27:53.521921 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-29 06:27:53.521931 | orchestrator | Monday 29 September 2025 06:26:27 +0000 (0:00:00.057) 0:01:48.905 ******
2025-09-29 06:27:53.521943 | orchestrator |
2025-09-29 06:27:53.521949 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-29 06:27:53.521959 | orchestrator | Monday 29 September 2025 06:26:27 +0000 (0:00:00.063) 0:01:48.968 ******
2025-09-29 06:27:53.521972 | orchestrator |
2025-09-29 06:27:53.521987 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-29 06:27:53.521997 | orchestrator | Monday 29 September 2025 06:26:27 +0000 (0:00:00.060) 0:01:49.029 ******
2025-09-29 06:27:53.522047 | orchestrator |
2025-09-29 06:27:53.522059 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-09-29 06:27:53.522075 | orchestrator | Monday 29 September 2025 06:26:27 +0000 (0:00:00.059) 0:01:49.088 ******
2025-09-29 06:27:53.522089 | orchestrator |
2025-09-29 06:27:53.522100 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-09-29 06:27:53.522111 | orchestrator | Monday 29 September 2025 06:26:27 +0000 (0:00:00.062) 0:01:49.151 ******
2025-09-29 06:27:53.522123 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:27:53.522135 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:27:53.522146 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:27:53.522157 | orchestrator |
2025-09-29 06:27:53.522169 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-09-29 06:27:53.522180 | orchestrator | Monday 29 September 2025 06:26:55 +0000 (0:00:28.127) 0:02:17.278 ******
2025-09-29 06:27:53.522193 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:27:53.522202 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:27:53.522214 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:27:53.522225 | orchestrator |
2025-09-29 06:27:53.522236 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-09-29 06:27:53.522248 | orchestrator | Monday 29 September 2025 06:27:01 +0000 (0:00:06.031) 0:02:23.309 ******
2025-09-29 06:27:53.522255 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:27:53.522261 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:27:53.522267 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:27:53.522274 | orchestrator |
2025-09-29 06:27:53.522280 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-09-29 06:27:53.522286 | orchestrator | Monday 29 September 2025 06:27:40 +0000 (0:00:38.639) 0:03:01.949 ******
2025-09-29 06:27:53.522292 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:27:53.522298 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:27:53.522304 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:27:53.522311 | orchestrator |
2025-09-29 06:27:53.522317 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-09-29 06:27:53.522323 | orchestrator | Monday 29 September 2025 06:27:50 +0000 (0:00:10.482) 0:03:12.431 ******
2025-09-29 06:27:53.522329 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:27:53.522336 | orchestrator |
2025-09-29 06:27:53.522342 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:27:53.522348 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-29 06:27:53.522356 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-29 06:27:53.522363 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-29 06:27:53.522369 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-29 06:27:53.522376 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-29 06:27:53.522382 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-29 06:27:53.522388 | orchestrator |
2025-09-29 06:27:53.522394 | orchestrator |
2025-09-29 06:27:53.522401 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:27:53.522407 | orchestrator | Monday 29 September 2025 06:27:51 +0000 (0:00:00.531) 0:03:12.963 ******
2025-09-29 06:27:53.522413 | orchestrator | ===============================================================================
2025-09-29 06:27:53.522419 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 38.64s
2025-09-29 06:27:53.522433 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 28.13s
2025-09-29 06:27:53.522439 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.54s
2025-09-29 06:27:53.522445 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.48s
2025-09-29 06:27:53.522452 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.46s
2025-09-29 06:27:53.522458 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 8.28s
2025-09-29 06:27:53.522464 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.14s
2025-09-29 06:27:53.522470 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.03s
2025-09-29 06:27:53.522505 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.09s
2025-09-29 06:27:53.522513 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.13s
2025-09-29 06:27:53.522519 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.88s
2025-09-29 06:27:53.522526 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.79s
2025-09-29 06:27:53.522532 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.60s
2025-09-29 06:27:53.522538 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.30s
2025-09-29 06:27:53.522544 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.14s
2025-09-29 06:27:53.522550 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.13s
2025-09-29 06:27:53.522556 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 2.74s
2025-09-29 06:27:53.522567 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.69s
2025-09-29 06:27:53.522573 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.66s
2025-09-29 06:27:53.522579 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.64s
2025-09-29 06:27:53.522586 | orchestrator | 2025-09-29 06:27:53 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:56.566792 | orchestrator | 2025-09-29 06:27:56 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:27:56.568392 | orchestrator | 2025-09-29 06:27:56 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:56.569993 | orchestrator | 2025-09-29 06:27:56 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:56.571953 | orchestrator | 2025-09-29 06:27:56 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:56.572029 | orchestrator | 2025-09-29 06:27:56 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:27:59.626419 | orchestrator | 2025-09-29 06:27:59 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:27:59.629340 | orchestrator | 2025-09-29 06:27:59 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:27:59.631859 | orchestrator | 2025-09-29 06:27:59 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:27:59.633735 | orchestrator | 2025-09-29 06:27:59 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:27:59.633779 | orchestrator | 2025-09-29 06:27:59 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:02.676941 | orchestrator | 2025-09-29 06:28:02 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:02.679831 | orchestrator | 2025-09-29 06:28:02 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:28:02.682394 | orchestrator | 2025-09-29 06:28:02 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:02.685067 | orchestrator | 2025-09-29 06:28:02 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:02.685121 | orchestrator | 2025-09-29 06:28:02 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:05.732433 | orchestrator | 2025-09-29 06:28:05 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:05.733247 | orchestrator | 2025-09-29 06:28:05 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:28:05.733900 | orchestrator | 2025-09-29 06:28:05 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:05.735060 | orchestrator | 2025-09-29 06:28:05 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:05.735107 | orchestrator | 2025-09-29 06:28:05 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:08.781156 | orchestrator | 2025-09-29 06:28:08 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:08.783255 | orchestrator | 2025-09-29 06:28:08 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:28:08.783744 | orchestrator | 2025-09-29 06:28:08 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:08.785157 | orchestrator | 2025-09-29 06:28:08 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:08.785245 | orchestrator | 2025-09-29 06:28:08 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:11.835423 | orchestrator | 2025-09-29 06:28:11 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:11.837187 | orchestrator | 2025-09-29 06:28:11 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:28:11.839912 | orchestrator | 2025-09-29 06:28:11 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:11.841072 | orchestrator | 2025-09-29 06:28:11 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:11.841417 | orchestrator | 2025-09-29 06:28:11 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:14.888854 | orchestrator | 2025-09-29 06:28:14 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:14.892453 | orchestrator | 2025-09-29 06:28:14 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:28:14.895978 | orchestrator | 2025-09-29 06:28:14 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:14.896604 | orchestrator | 2025-09-29 06:28:14 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:14.896653 | orchestrator | 2025-09-29 06:28:14 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:17.944113 | orchestrator | 2025-09-29 06:28:17 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:17.946395 | orchestrator | 2025-09-29 06:28:17 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state STARTED
2025-09-29 06:28:17.948833 | orchestrator | 2025-09-29 06:28:17 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:17.951205 | orchestrator | 2025-09-29 06:28:17 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:17.951230 | orchestrator | 2025-09-29 06:28:17 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:20.995905 | orchestrator | 2025-09-29 06:28:20 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:20.998760 | orchestrator | 2025-09-29 06:28:20 | INFO  | Task e3b57b63-0e64-4335-96e1-c5d47524f582 is in state SUCCESS
2025-09-29 06:28:21.001045 | orchestrator |
2025-09-29 06:28:21.001109 | orchestrator |
2025-09-29 06:28:21.001129 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:28:21.001146 | orchestrator |
2025-09-29 06:28:21.001161 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:28:21.001178 | orchestrator | Monday 29 September 2025 06:25:54 +0000 (0:00:00.230) 0:00:00.230 ******
2025-09-29 06:28:21.001193 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:28:21.001211 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:28:21.001227 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:28:21.001242 | orchestrator |
2025-09-29 06:28:21.001259 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:28:21.001275 | orchestrator | Monday 29 September 2025 06:25:54 +0000 (0:00:00.248) 0:00:00.478 ******
2025-09-29 06:28:21.001290 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-09-29 06:28:21.001308 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-09-29 06:28:21.001325 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-09-29 06:28:21.001342 | orchestrator |
2025-09-29 06:28:21.001358 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-09-29 06:28:21.001375 | orchestrator |
2025-09-29 06:28:21.001391 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-29 06:28:21.001405 | orchestrator | Monday 29 September 2025 06:25:55 +0000 (0:00:00.736) 0:00:01.214 ******
2025-09-29 06:28:21.001423 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:28:21.001439 | orchestrator |
2025-09-29 06:28:21.001454 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-09-29 06:28:21.001523 | orchestrator | Monday 29 September 2025 06:25:56 +0000 (0:00:01.137) 0:00:02.352 ******
2025-09-29 06:28:21.001613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.001637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.001674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.001716 | orchestrator |
2025-09-29 06:28:21.001733 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-09-29 06:28:21.001749 | orchestrator | Monday 29 September 2025 06:25:57 +0000 (0:00:00.940) 0:00:03.292 ******
2025-09-29 06:28:21.001766 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-09-29 06:28:21.001783 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-09-29 06:28:21.001799 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-29 06:28:21.001816 | orchestrator |
2025-09-29 06:28:21.001831 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-09-29 06:28:21.001847 | orchestrator | Monday 29 September 2025 06:25:58 +0000 (0:00:01.116) 0:00:04.409 ******
2025-09-29 06:28:21.001862 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:28:21.001878 | orchestrator |
2025-09-29 06:28:21.001895 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-09-29 06:28:21.001912 | orchestrator | Monday 29 September 2025 06:25:59 +0000 (0:00:00.527) 0:00:04.937 ******
2025-09-29 06:28:21.001952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.001972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.001989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002005 | orchestrator |
2025-09-29 06:28:21.002071 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-09-29 06:28:21.002091 | orchestrator | Monday 29 September 2025 06:26:00 +0000 (0:00:01.499) 0:00:06.436 ******
2025-09-29 06:28:21.002108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002140 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:28:21.002167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002185 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:28:21.002216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002234 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:28:21.002251 | orchestrator |
2025-09-29 06:28:21.002269 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-09-29 06:28:21.002286 | orchestrator | Monday 29 September 2025 06:26:00 +0000 (0:00:00.446) 0:00:06.883 ******
2025-09-29 06:28:21.002304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002340 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:28:21.002359 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:28:21.002378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002407 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:28:21.002425 | orchestrator |
2025-09-29 06:28:21.002443 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-09-29 06:28:21.002460 | orchestrator | Monday 29 September 2025 06:26:01 +0000 (0:00:00.608) 0:00:07.491 ******
2025-09-29 06:28:21.002516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-29 06:28:21.002636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:28:21.002657 | orchestrator | 2025-09-29 06:28:21.002674 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-29 06:28:21.002690 | orchestrator | Monday 29 September 2025 06:26:02 +0000 (0:00:01.032) 0:00:08.524 ****** 2025-09-29 06:28:21.002706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:28:21.002725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:28:21.002754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:28:21.002772 | orchestrator | 2025-09-29 06:28:21.002789 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-29 06:28:21.002806 | orchestrator | Monday 29 September 2025 06:26:03 +0000 (0:00:01.320) 0:00:09.844 ****** 2025-09-29 06:28:21.002822 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:28:21.002838 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:28:21.002854 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:28:21.002869 | orchestrator | 2025-09-29 06:28:21.002886 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-29 06:28:21.002909 | orchestrator | Monday 29 September 2025 06:26:04 +0000 (0:00:00.416) 0:00:10.260 ****** 2025-09-29 06:28:21.002926 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-29 06:28:21.002944 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-29 06:28:21.002961 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-29 06:28:21.002977 | orchestrator | 2025-09-29 06:28:21.002995 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-29 06:28:21.003011 | orchestrator | Monday 29 September 2025 06:26:05 +0000 (0:00:01.144) 0:00:11.405 ****** 2025-09-29 06:28:21.003030 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-29 06:28:21.003048 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-29 06:28:21.003065 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-29 06:28:21.003081 | orchestrator | 2025-09-29 06:28:21.003098 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-29 06:28:21.003115 | orchestrator | Monday 29 September 2025 06:26:06 +0000 (0:00:01.305) 0:00:12.710 ****** 2025-09-29 06:28:21.003182 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-29 06:28:21.003201 | orchestrator | 2025-09-29 06:28:21.003218 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-29 06:28:21.003236 | orchestrator | Monday 29 September 2025 06:26:07 +0000 (0:00:00.664) 0:00:13.375 ****** 2025-09-29 06:28:21.003252 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-29 06:28:21.003270 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-29 06:28:21.003287 | orchestrator 
| ok: [testbed-node-0] 2025-09-29 06:28:21.003304 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:28:21.003321 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:28:21.003338 | orchestrator | 2025-09-29 06:28:21.003355 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-29 06:28:21.003372 | orchestrator | Monday 29 September 2025 06:26:08 +0000 (0:00:00.637) 0:00:14.012 ****** 2025-09-29 06:28:21.003389 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:28:21.003407 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:28:21.003423 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:28:21.003440 | orchestrator | 2025-09-29 06:28:21.003456 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-29 06:28:21.003513 | orchestrator | Monday 29 September 2025 06:26:08 +0000 (0:00:00.380) 0:00:14.392 ****** 2025-09-29 06:28:21.003532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095796, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.989454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 
'inode': 1095796, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.989454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1095796, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.989454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096008, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0474203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096008, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0474203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1096008, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0474203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095916, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9915063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095916, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9915063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1095916, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9915063, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096009, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.049033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096009, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.049033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1096009, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.049033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095982, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0377817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095982, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0377817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1095982, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0377817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096002, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0452037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.003945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096002, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0452037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1096002, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0452037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095664, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9290137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': 
{'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095664, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9290137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1095664, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9290137, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095909, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9899974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095909, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9899974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1095909, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9899974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095918, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9917495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004215 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095918, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9917495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1095918, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9917495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095990, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0398166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004269 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095990, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0398166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1095990, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0398166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096007, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0462036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004376 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096007, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0462036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1096007, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0462036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095911, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.990807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004424 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095911, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.990807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1095911, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.990807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096000, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0432036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-09-29 06:28:21.004572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096000, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0432036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1096000, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0432036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095985, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0393443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095985, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0393443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1095985, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0393443, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095978, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0372257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095978, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0372257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1095978, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0372257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095942, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0352035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095942, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0352035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1095942, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0352035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095992, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0432036, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095992, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0432036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095920, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9917495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1095992, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0432036, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095920, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9917495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1095920, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124398.9917495, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096006, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 
1759104134.0, 'ctime': 1759124399.0452037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096328, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.141626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096006, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0452037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1096006, 
'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0452037, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.004995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1096328, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.141626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096135, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.092444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 57270, 'inode': 1096328, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.141626, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096058, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0628583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096135, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.092444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1096135, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.092444, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096151, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0940824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096058, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0628583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1096058, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0628583, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096013, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0607045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096151, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0940824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1096151, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0940824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096192, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1126738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096013, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0607045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005281 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1096013, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0607045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096192, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1126738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096153, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1088622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1096192, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1126738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096153, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1088622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096202, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1133118, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1096153, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1088622, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096322, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.139404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096202, 'dev': 105, 'nlink': 1, 
'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1133118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1096202, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1133118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096190, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1109848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096322, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.139404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1096322, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.139404, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096145, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0933635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096190, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1109848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1096190, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1109848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096064, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0894783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096145, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0933635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1096145, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0933635, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096143, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0929487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096064, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0894783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1096064, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0894783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096062, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0648813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005794 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096143, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0929487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1096143, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0929487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096147, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0937576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-29 06:28:21.005860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096062, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0648813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096311, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.138767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1096062, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0648813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096147, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0937576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1096147, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0937576, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096210, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 
1759124399.1361372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.005990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096311, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.138767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1096311, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.138767, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096054, 'dev': 105, 'nlink': 1, 
'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0607045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096210, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1361372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096054, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0607045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096057, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0619273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1096210, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1361372, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096057, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0619273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096183, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1105187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1096054, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0607045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096206, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1133118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096183, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1105187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1096057, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.0619273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096206, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1133118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006346 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1096183, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1105187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1096206, 'dev': 105, 'nlink': 1, 'atime': 1759104134.0, 'mtime': 1759104134.0, 'ctime': 1759124399.1133118, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-29 06:28:21.006373 | orchestrator | 2025-09-29 06:28:21.006386 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-29 06:28:21.006407 | orchestrator | Monday 29 September 2025 06:26:46 +0000 (0:00:38.020) 0:00:52.412 ****** 2025-09-29 06:28:21.006421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:28:21.006434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:28:21.006454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-29 06:28:21.006497 | orchestrator | 2025-09-29 06:28:21.006512 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-29 06:28:21.006525 | orchestrator | Monday 29 
September 2025 06:26:47 +0000 (0:00:00.911) 0:00:53.324 ****** 2025-09-29 06:28:21.006539 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:28:21.006553 | orchestrator | 2025-09-29 06:28:21.006565 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-29 06:28:21.006577 | orchestrator | Monday 29 September 2025 06:26:49 +0000 (0:00:02.472) 0:00:55.796 ****** 2025-09-29 06:28:21.006589 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:28:21.006602 | orchestrator | 2025-09-29 06:28:21.006616 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-29 06:28:21.006629 | orchestrator | Monday 29 September 2025 06:26:52 +0000 (0:00:02.551) 0:00:58.348 ****** 2025-09-29 06:28:21.006640 | orchestrator | 2025-09-29 06:28:21.006652 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-29 06:28:21.006673 | orchestrator | Monday 29 September 2025 06:26:52 +0000 (0:00:00.134) 0:00:58.483 ****** 2025-09-29 06:28:21.006685 | orchestrator | 2025-09-29 06:28:21.006698 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-29 06:28:21.006711 | orchestrator | Monday 29 September 2025 06:26:52 +0000 (0:00:00.093) 0:00:58.577 ****** 2025-09-29 06:28:21.006723 | orchestrator | 2025-09-29 06:28:21.006736 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-29 06:28:21.006749 | orchestrator | Monday 29 September 2025 06:26:52 +0000 (0:00:00.239) 0:00:58.816 ****** 2025-09-29 06:28:21.006762 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:28:21.006775 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:28:21.006789 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:28:21.006814 | orchestrator | 2025-09-29 06:28:21.006828 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first 
node] ********* 2025-09-29 06:28:21.006841 | orchestrator | Monday 29 September 2025 06:26:55 +0000 (0:00:02.183) 0:01:00.999 ****** 2025-09-29 06:28:21.006854 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:28:21.006867 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:28:21.006880 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-29 06:28:21.006894 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-29 06:28:21.006906 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-09-29 06:28:21.006920 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:28:21.006932 | orchestrator | 2025-09-29 06:28:21.006943 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-09-29 06:28:21.006953 | orchestrator | Monday 29 September 2025 06:27:35 +0000 (0:00:40.186) 0:01:41.186 ****** 2025-09-29 06:28:21.006964 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:28:21.006974 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:28:21.006984 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:28:21.006995 | orchestrator | 2025-09-29 06:28:21.007005 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-09-29 06:28:21.007015 | orchestrator | Monday 29 September 2025 06:28:12 +0000 (0:00:37.221) 0:02:18.407 ****** 2025-09-29 06:28:21.007026 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:28:21.007036 | orchestrator | 2025-09-29 06:28:21.007046 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-09-29 06:28:21.007056 | orchestrator | Monday 29 September 2025 06:28:14 +0000 (0:00:02.465) 0:02:20.873 ****** 2025-09-29 06:28:21.007066 | orchestrator | skipping: [testbed-node-0] 2025-09-29 
06:28:21.007076 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:28:21.007086 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:28:21.007096 | orchestrator |
2025-09-29 06:28:21.007107 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-29 06:28:21.007117 | orchestrator | Monday 29 September 2025 06:28:15 +0000 (0:00:00.521)       0:02:21.394 ******
2025-09-29 06:28:21.007129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-29 06:28:21.007144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-29 06:28:21.007155 | orchestrator |
2025-09-29 06:28:21.007165 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-29 06:28:21.007175 | orchestrator | Monday 29 September 2025 06:28:18 +0000 (0:00:02.660)       0:02:24.055 ******
2025-09-29 06:28:21.007185 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:28:21.007195 | orchestrator |
2025-09-29 06:28:21.007206 | orchestrator | PLAY RECAP *********************************************************************
2025-09-29 06:28:21.007227 | orchestrator | testbed-node-0             : ok=21   changed=12   unreachable=0    failed=0    skipped=7    rescued=0    ignored=0
2025-09-29 06:28:21.007239 | orchestrator | testbed-node-1             : ok=14   changed=9    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0
2025-09-29 06:28:21.007249 | orchestrator | testbed-node-2             : ok=14   changed=9    unreachable=0    failed=0    skipped=7    rescued=0    ignored=0
2025-09-29 06:28:21.007258 | orchestrator |
2025-09-29 06:28:21.007287 | orchestrator | TASKS RECAP ********************************************************************
2025-09-29 06:28:21.007297 | orchestrator | Monday 29 September 2025 06:28:18 +0000 (0:00:00.264)       0:02:24.320 ******
2025-09-29 06:28:21.007307 | orchestrator | ===============================================================================
2025-09-29 06:28:21.007318 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 40.19s
2025-09-29 06:28:21.007328 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.02s
2025-09-29 06:28:21.007338 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 37.22s
2025-09-29 06:28:21.007348 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.66s
2025-09-29 06:28:21.007359 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.55s
2025-09-29 06:28:21.007380 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.47s
2025-09-29 06:28:21.007390 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.47s
2025-09-29 06:28:21.007401 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.18s
2025-09-29 06:28:21.007412 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.50s
2025-09-29 06:28:21.007422 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.32s
2025-09-29 06:28:21.007433 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.31s
2025-09-29 06:28:21.007443 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.14s
2025-09-29 06:28:21.007453 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.14s
2025-09-29 06:28:21.007464 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.12s
2025-09-29 06:28:21.007539 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.03s
2025-09-29 06:28:21.007549 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.94s
2025-09-29 06:28:21.007558 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.91s
2025-09-29 06:28:21.007567 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2025-09-29 06:28:21.007577 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.66s
2025-09-29 06:28:21.007587 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.64s
2025-09-29 06:28:21.007598 | orchestrator | 2025-09-29 06:28:21 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:21.007610 | orchestrator | 2025-09-29 06:28:21 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:21.007621 | orchestrator | 2025-09-29 06:28:21 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:24.054077 | orchestrator | 2025-09-29 06:28:24 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state STARTED
2025-09-29 06:28:24.055867 | orchestrator | 2025-09-29 06:28:24 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:24.058559 | orchestrator | 2025-09-29 06:28:24 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:24.058610 | orchestrator | 2025-09-29 06:28:24 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:28:51.466892 | orchestrator | 2025-09-29 06:28:51 | INFO  | Task ec211a78-588b-4c0b-823c-6f4326b180c2 is in state SUCCESS
2025-09-29 06:28:51.468506 | orchestrator | 2025-09-29 06:28:51 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:28:51.472109 | orchestrator | 2025-09-29 06:28:51 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state STARTED
2025-09-29 06:28:51.472163 | orchestrator | 2025-09-29 06:28:51 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:29:49.329131 | orchestrator | 2025-09-29 06:29:49 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:29:49.329951 | orchestrator | 2025-09-29 06:29:49 | INFO  | Task 9db7b8a2-eef4-47e8-95f1-f451f383ef41 is in state SUCCESS
2025-09-29 06:29:49.329978 | orchestrator | 2025-09-29 06:29:49 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:29:52.364826 | orchestrator | 2025-09-29 06:29:52 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state STARTED
2025-09-29 06:29:52.364935 | orchestrator | 2025-09-29 06:29:52 | INFO  | Wait 1 second(s) until the next check
2025-09-29 06:34:14.028931 | orchestrator | 2025-09-29 06:34:14 | INFO  | Task ba24a910-7376-4d2f-9061-4ec014169a77 is in state SUCCESS
2025-09-29 06:34:14.031444 | orchestrator |
2025-09-29 06:34:14.031536 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-29 06:34:14.031548 | orchestrator |
2025-09-29 06:34:14.031559 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-29 06:34:14.031570 | orchestrator | Monday 29 September 2025 06:27:55 +0000 (0:00:00.238)       0:00:00.238 ******
2025-09-29 06:34:14.031580 | orchestrator | ok: [testbed-node-0]
2025-09-29 06:34:14.031591 | orchestrator | ok: [testbed-node-1]
2025-09-29 06:34:14.031625 | orchestrator | ok: [testbed-node-2]
2025-09-29 06:34:14.031635 | orchestrator |
2025-09-29 06:34:14.031645 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-29 06:34:14.031655 | orchestrator | Monday 29 September 2025 06:27:55 +0000 (0:00:00.262)       0:00:00.501 ******
2025-09-29 06:34:14.031664 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-09-29 06:34:14.031689 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-09-29 06:34:14.031699 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-09-29 06:34:14.031708 | orchestrator |
2025-09-29 06:34:14.031718 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-09-29 06:34:14.031774 | orchestrator |
2025-09-29 06:34:14.031785 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-29 06:34:14.031820 | orchestrator | Monday 29 September 2025 06:27:55 +0000 (0:00:00.330)       0:00:00.831 ******
2025-09-29 06:34:14.031830 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:34:14.031841 | orchestrator |
2025-09-29 06:34:14.031851 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-09-29 06:34:14.031909 | orchestrator | Monday 29 September 2025 06:27:56 +0000 (0:00:00.469)       0:00:01.301 ******
2025-09-29 06:34:14.031922 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-09-29 06:34:14.031931 | orchestrator |
2025-09-29 06:34:14.031941 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-09-29 06:34:14.031950
| orchestrator | Monday 29 September 2025 06:27:59 +0000 (0:00:03.397) 0:00:04.699 ****** 2025-09-29 06:34:14.031960 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-29 06:34:14.031970 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-29 06:34:14.031979 | orchestrator | 2025-09-29 06:34:14.031989 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-29 06:34:14.031998 | orchestrator | Monday 29 September 2025 06:28:06 +0000 (0:00:07.019) 0:00:11.718 ****** 2025-09-29 06:34:14.032008 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-29 06:34:14.032018 | orchestrator | 2025-09-29 06:34:14.032027 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-29 06:34:14.032119 | orchestrator | Monday 29 September 2025 06:28:10 +0000 (0:00:03.527) 0:00:15.246 ****** 2025-09-29 06:34:14.032130 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-29 06:34:14.032140 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-29 06:34:14.032150 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-29 06:34:14.032159 | orchestrator | 2025-09-29 06:34:14.032169 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-29 06:34:14.032191 | orchestrator | Monday 29 September 2025 06:28:19 +0000 (0:00:09.067) 0:00:24.314 ****** 2025-09-29 06:34:14.032201 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-29 06:34:14.032210 | orchestrator | 2025-09-29 06:34:14.032220 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-29 06:34:14.032230 | orchestrator | Monday 29 September 2025 06:28:22 +0000 (0:00:03.500) 0:00:27.814 ****** 2025-09-29 
06:34:14.032239 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-29 06:34:14.032249 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-29 06:34:14.032258 | orchestrator | 2025-09-29 06:34:14.032294 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-29 06:34:14.032312 | orchestrator | Monday 29 September 2025 06:28:30 +0000 (0:00:07.697) 0:00:35.512 ****** 2025-09-29 06:34:14.032323 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-29 06:34:14.032332 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-29 06:34:14.032352 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-29 06:34:14.032362 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-29 06:34:14.032371 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-29 06:34:14.032398 | orchestrator | 2025-09-29 06:34:14.032408 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-29 06:34:14.032418 | orchestrator | Monday 29 September 2025 06:28:46 +0000 (0:00:16.241) 0:00:51.753 ****** 2025-09-29 06:34:14.032427 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:34:14.032437 | orchestrator | 2025-09-29 06:34:14.032457 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-29 06:34:14.032467 | orchestrator | Monday 29 September 2025 06:28:47 +0000 (0:00:00.551) 0:00:52.305 ****** 2025-09-29 06:34:14.032498 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "503 Service Unavailable\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-09-29 06:34:14.032512 | orchestrator | 2025-09-29 06:34:14.032522 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:34:14.032533 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-29 06:34:14.032544 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:34:14.032606 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:34:14.032617 | orchestrator | 2025-09-29 06:34:14.032626 | orchestrator | 2025-09-29 06:34:14.032636 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:34:14.032645 | orchestrator | Monday 29 September 2025 06:28:50 +0000 (0:00:03.359) 0:00:55.664 ****** 2025-09-29 06:34:14.032655 | orchestrator | =============================================================================== 2025-09-29 06:34:14.032664 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.24s 2025-09-29 06:34:14.032674 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.07s 2025-09-29 06:34:14.032683 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.70s 2025-09-29 06:34:14.032693 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.02s 2025-09-29 06:34:14.032702 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.53s 2025-09-29 06:34:14.032712 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 
3.50s 2025-09-29 06:34:14.032721 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.40s 2025-09-29 06:34:14.032731 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.36s 2025-09-29 06:34:14.032740 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.55s 2025-09-29 06:34:14.032750 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.47s 2025-09-29 06:34:14.032759 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.33s 2025-09-29 06:34:14.032769 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-29 06:34:14.032778 | orchestrator | 2025-09-29 06:34:14.032788 | orchestrator | 2025-09-29 06:34:14.032798 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:34:14.032807 | orchestrator | 2025-09-29 06:34:14.032817 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:34:14.032834 | orchestrator | Monday 29 September 2025 06:27:27 +0000 (0:00:00.162) 0:00:00.162 ****** 2025-09-29 06:34:14.032844 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.032854 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:34:14.032863 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:34:14.032873 | orchestrator | 2025-09-29 06:34:14.032883 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:34:14.032892 | orchestrator | Monday 29 September 2025 06:27:27 +0000 (0:00:00.277) 0:00:00.439 ****** 2025-09-29 06:34:14.032902 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-29 06:34:14.032912 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-29 06:34:14.032921 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-09-29 06:34:14.032931 | orchestrator | 2025-09-29 06:34:14.032940 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-29 06:34:14.032949 | orchestrator | 2025-09-29 06:34:14.032959 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-29 06:34:14.032968 | orchestrator | Monday 29 September 2025 06:27:28 +0000 (0:00:00.502) 0:00:00.942 ****** 2025-09-29 06:34:14.032978 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.032987 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:34:14.032997 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:34:14.033006 | orchestrator | 2025-09-29 06:34:14.033016 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:34:14.033026 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:34:14.033035 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:34:14.033045 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:34:14.033055 | orchestrator | 2025-09-29 06:34:14.033064 | orchestrator | 2025-09-29 06:34:14.033077 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:34:14.033093 | orchestrator | Monday 29 September 2025 06:29:48 +0000 (0:02:20.707) 0:02:21.649 ****** 2025-09-29 06:34:14.033108 | orchestrator | =============================================================================== 2025-09-29 06:34:14.033124 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 140.71s 2025-09-29 06:34:14.033139 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s 2025-09-29 06:34:14.033155 | orchestrator | Group hosts based on 
Kolla action --------------------------------------- 0.28s 2025-09-29 06:34:14.033390 | orchestrator | 2025-09-29 06:34:14.033411 | orchestrator | 2025-09-29 06:34:14.033422 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:34:14.033432 | orchestrator | 2025-09-29 06:34:14.033443 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-29 06:34:14.033466 | orchestrator | Monday 29 September 2025 06:25:25 +0000 (0:00:00.567) 0:00:00.567 ****** 2025-09-29 06:34:14.033478 | orchestrator | changed: [testbed-manager] 2025-09-29 06:34:14.033489 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.033500 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:34:14.033510 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:34:14.033521 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.033532 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.033542 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.033553 | orchestrator | 2025-09-29 06:34:14.033564 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:34:14.033575 | orchestrator | Monday 29 September 2025 06:25:26 +0000 (0:00:00.767) 0:00:01.335 ****** 2025-09-29 06:34:14.033586 | orchestrator | changed: [testbed-manager] 2025-09-29 06:34:14.033596 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.033627 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:34:14.033653 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:34:14.033670 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.033687 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.033707 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.033726 | orchestrator | 2025-09-29 06:34:14.033744 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2025-09-29 06:34:14.033761 | orchestrator | Monday 29 September 2025 06:25:27 +0000 (0:00:00.601) 0:00:01.937 ****** 2025-09-29 06:34:14.033772 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-29 06:34:14.033782 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-29 06:34:14.033793 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-29 06:34:14.033804 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-29 06:34:14.033814 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-29 06:34:14.033825 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-29 06:34:14.033835 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-09-29 06:34:14.033846 | orchestrator | 2025-09-29 06:34:14.033856 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-29 06:34:14.033867 | orchestrator | 2025-09-29 06:34:14.033877 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-29 06:34:14.033888 | orchestrator | Monday 29 September 2025 06:25:27 +0000 (0:00:00.741) 0:00:02.678 ****** 2025-09-29 06:34:14.033899 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:34:14.033909 | orchestrator | 2025-09-29 06:34:14.033920 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-29 06:34:14.033930 | orchestrator | Monday 29 September 2025 06:25:28 +0000 (0:00:00.852) 0:00:03.531 ****** 2025-09-29 06:34:14.033941 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-29 06:34:14.033952 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-29 06:34:14.033962 | orchestrator | 2025-09-29 06:34:14.033973 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 
2025-09-29 06:34:14.033983 | orchestrator | Monday 29 September 2025 06:25:33 +0000 (0:00:04.535) 0:00:08.066 ****** 2025-09-29 06:34:14.033994 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-29 06:34:14.034005 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-29 06:34:14.034065 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.034080 | orchestrator | 2025-09-29 06:34:14.034091 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-29 06:34:14.034102 | orchestrator | Monday 29 September 2025 06:25:38 +0000 (0:00:04.803) 0:00:12.869 ****** 2025-09-29 06:34:14.034113 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.034124 | orchestrator | 2025-09-29 06:34:14.034134 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-29 06:34:14.034145 | orchestrator | Monday 29 September 2025 06:25:38 +0000 (0:00:00.776) 0:00:13.646 ****** 2025-09-29 06:34:14.034156 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.034166 | orchestrator | 2025-09-29 06:34:14.034177 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-29 06:34:14.034188 | orchestrator | Monday 29 September 2025 06:25:40 +0000 (0:00:01.594) 0:00:15.241 ****** 2025-09-29 06:34:14.034199 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.034210 | orchestrator | 2025-09-29 06:34:14.034220 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-29 06:34:14.034231 | orchestrator | Monday 29 September 2025 06:25:43 +0000 (0:00:02.643) 0:00:17.885 ****** 2025-09-29 06:34:14.034244 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.034333 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.034358 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.034377 | orchestrator | 2025-09-29 06:34:14.034397 | 
orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-29 06:34:14.034428 | orchestrator | Monday 29 September 2025 06:25:43 +0000 (0:00:00.288) 0:00:18.173 ****** 2025-09-29 06:34:14.034445 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.034456 | orchestrator | 2025-09-29 06:34:14.034467 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-29 06:34:14.034478 | orchestrator | Monday 29 September 2025 06:26:18 +0000 (0:00:34.821) 0:00:52.995 ****** 2025-09-29 06:34:14.034488 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.034499 | orchestrator | 2025-09-29 06:34:14.034510 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-29 06:34:14.034521 | orchestrator | Monday 29 September 2025 06:26:35 +0000 (0:00:17.176) 0:01:10.172 ****** 2025-09-29 06:34:14.034531 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.034542 | orchestrator | 2025-09-29 06:34:14.034553 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-29 06:34:14.034563 | orchestrator | Monday 29 September 2025 06:26:50 +0000 (0:00:15.177) 0:01:25.349 ****** 2025-09-29 06:34:14.034574 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.034584 | orchestrator | 2025-09-29 06:34:14.034595 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-29 06:34:14.034606 | orchestrator | Monday 29 September 2025 06:26:51 +0000 (0:00:01.039) 0:01:26.389 ****** 2025-09-29 06:34:14.034616 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.034627 | orchestrator | 2025-09-29 06:34:14.034650 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-29 06:34:14.034661 | orchestrator | Monday 29 September 2025 06:26:52 +0000 (0:00:00.528) 0:01:26.917 ****** 2025-09-29 
06:34:14.034672 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:34:14.034683 | orchestrator | 2025-09-29 06:34:14.034694 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-29 06:34:14.034704 | orchestrator | Monday 29 September 2025 06:26:52 +0000 (0:00:00.707) 0:01:27.625 ****** 2025-09-29 06:34:14.034715 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.034726 | orchestrator | 2025-09-29 06:34:14.034737 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-29 06:34:14.034754 | orchestrator | Monday 29 September 2025 06:27:14 +0000 (0:00:22.059) 0:01:49.684 ****** 2025-09-29 06:34:14.034765 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.034776 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.034787 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.034798 | orchestrator | 2025-09-29 06:34:14.034808 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-29 06:34:14.034819 | orchestrator | 2025-09-29 06:34:14.034830 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-29 06:34:14.034840 | orchestrator | Monday 29 September 2025 06:27:15 +0000 (0:00:00.273) 0:01:49.958 ****** 2025-09-29 06:34:14.034851 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:34:14.034861 | orchestrator | 2025-09-29 06:34:14.034872 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-29 06:34:14.034882 | orchestrator | Monday 29 September 2025 06:27:15 +0000 (0:00:00.491) 0:01:50.449 ****** 2025-09-29 06:34:14.034893 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.034904 | orchestrator | skipping: [testbed-node-2] 
2025-09-29 06:34:14.034915 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.034924 | orchestrator | 2025-09-29 06:34:14.034933 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-29 06:34:14.034943 | orchestrator | Monday 29 September 2025 06:27:18 +0000 (0:00:02.506) 0:01:52.955 ****** 2025-09-29 06:34:14.034952 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.034962 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.034972 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.034981 | orchestrator | 2025-09-29 06:34:14.034998 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-29 06:34:14.035008 | orchestrator | Monday 29 September 2025 06:27:20 +0000 (0:00:02.620) 0:01:55.575 ****** 2025-09-29 06:34:14.035017 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.035026 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035036 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035046 | orchestrator | 2025-09-29 06:34:14.035055 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-29 06:34:14.035065 | orchestrator | Monday 29 September 2025 06:27:21 +0000 (0:00:00.299) 0:01:55.875 ****** 2025-09-29 06:34:14.035074 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-29 06:34:14.035084 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035093 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-29 06:34:14.035103 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035113 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-29 06:34:14.035122 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-29 06:34:14.035132 | orchestrator | 2025-09-29 06:34:14.035141 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts 
exist] ****************** 2025-09-29 06:34:14.035151 | orchestrator | Monday 29 September 2025 06:27:30 +0000 (0:00:09.752) 0:02:05.627 ****** 2025-09-29 06:34:14.035160 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.035170 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035179 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035189 | orchestrator | 2025-09-29 06:34:14.035198 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-29 06:34:14.035208 | orchestrator | Monday 29 September 2025 06:27:31 +0000 (0:00:00.309) 0:02:05.937 ****** 2025-09-29 06:34:14.035217 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-29 06:34:14.035227 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.035236 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-29 06:34:14.035246 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035255 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-29 06:34:14.035294 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035314 | orchestrator | 2025-09-29 06:34:14.035331 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-29 06:34:14.035346 | orchestrator | Monday 29 September 2025 06:27:31 +0000 (0:00:00.539) 0:02:06.477 ****** 2025-09-29 06:34:14.035362 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035377 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035394 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.035411 | orchestrator | 2025-09-29 06:34:14.035427 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-29 06:34:14.035443 | orchestrator | Monday 29 September 2025 06:27:32 +0000 (0:00:00.632) 0:02:07.109 ****** 2025-09-29 06:34:14.035459 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035474 | 
orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035492 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.035508 | orchestrator | 2025-09-29 06:34:14.035525 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-29 06:34:14.035536 | orchestrator | Monday 29 September 2025 06:27:33 +0000 (0:00:01.064) 0:02:08.174 ****** 2025-09-29 06:34:14.035546 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035556 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035566 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.035576 | orchestrator | 2025-09-29 06:34:14.035585 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-09-29 06:34:14.035595 | orchestrator | Monday 29 September 2025 06:27:35 +0000 (0:00:02.054) 0:02:10.228 ****** 2025-09-29 06:34:14.035623 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035633 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035643 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.035661 | orchestrator | 2025-09-29 06:34:14.035671 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-29 06:34:14.035681 | orchestrator | Monday 29 September 2025 06:27:57 +0000 (0:00:22.393) 0:02:32.621 ****** 2025-09-29 06:34:14.035690 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035700 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035709 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.035719 | orchestrator | 2025-09-29 06:34:14.035728 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-29 06:34:14.035738 | orchestrator | Monday 29 September 2025 06:28:11 +0000 (0:00:13.906) 0:02:46.527 ****** 2025-09-29 06:34:14.035747 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.035763 | orchestrator | 
skipping: [testbed-node-1] 2025-09-29 06:34:14.035772 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035782 | orchestrator | 2025-09-29 06:34:14.035791 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-29 06:34:14.035801 | orchestrator | Monday 29 September 2025 06:28:12 +0000 (0:00:01.094) 0:02:47.622 ****** 2025-09-29 06:34:14.035810 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035820 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035829 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.035839 | orchestrator | 2025-09-29 06:34:14.035848 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-29 06:34:14.035858 | orchestrator | Monday 29 September 2025 06:28:26 +0000 (0:00:13.615) 0:03:01.238 ****** 2025-09-29 06:34:14.035867 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.035877 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035887 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035896 | orchestrator | 2025-09-29 06:34:14.035906 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-29 06:34:14.035916 | orchestrator | Monday 29 September 2025 06:28:27 +0000 (0:00:01.018) 0:03:02.256 ****** 2025-09-29 06:34:14.035925 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.035935 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.035944 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.035953 | orchestrator | 2025-09-29 06:34:14.035963 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-29 06:34:14.035972 | orchestrator | 2025-09-29 06:34:14.035982 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-29 06:34:14.035991 | orchestrator | Monday 29 September 
2025 06:28:27 +0000 (0:00:00.483) 0:03:02.739 ****** 2025-09-29 06:34:14.036001 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:34:14.036011 | orchestrator | 2025-09-29 06:34:14.036021 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-29 06:34:14.036030 | orchestrator | Monday 29 September 2025 06:28:28 +0000 (0:00:00.537) 0:03:03.277 ****** 2025-09-29 06:34:14.036040 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-29 06:34:14.036049 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-29 06:34:14.036059 | orchestrator | 2025-09-29 06:34:14.036069 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-29 06:34:14.036078 | orchestrator | Monday 29 September 2025 06:28:31 +0000 (0:00:03.437) 0:03:06.715 ****** 2025-09-29 06:34:14.036088 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-29 06:34:14.036098 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-29 06:34:14.036107 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-29 06:34:14.036117 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-29 06:34:14.036127 | orchestrator | 2025-09-29 06:34:14.036136 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-29 06:34:14.036152 | orchestrator | Monday 29 September 2025 06:28:38 +0000 (0:00:06.505) 0:03:13.220 ****** 2025-09-29 06:34:14.036162 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-29 06:34:14.036171 | orchestrator | 
2025-09-29 06:34:14.036181 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-29 06:34:14.036190 | orchestrator | Monday 29 September 2025 06:28:41 +0000 (0:00:03.300) 0:03:16.520 ****** 2025-09-29 06:34:14.036201 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-29 06:34:14.036210 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-29 06:34:14.036220 | orchestrator | 2025-09-29 06:34:14.036230 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-29 06:34:14.036239 | orchestrator | Monday 29 September 2025 06:28:45 +0000 (0:00:04.164) 0:03:20.685 ****** 2025-09-29 06:34:14.036249 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-29 06:34:14.036259 | orchestrator | 2025-09-29 06:34:14.036325 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-29 06:34:14.036338 | orchestrator | Monday 29 September 2025 06:28:49 +0000 (0:00:03.773) 0:03:24.458 ****** 2025-09-29 06:34:14.036348 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-29 06:34:14.036357 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-29 06:34:14.036367 | orchestrator | 2025-09-29 06:34:14.036376 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-29 06:34:14.036386 | orchestrator | Monday 29 September 2025 06:28:58 +0000 (0:00:08.391) 0:03:32.850 ****** 2025-09-29 06:34:14.036412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.036427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.036494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.036523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.036548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.036565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.036582 | orchestrator | 2025-09-29 06:34:14.036597 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-29 06:34:14.036612 | orchestrator | Monday 29 September 2025 06:28:59 +0000 (0:00:01.283) 0:03:34.134 ****** 2025-09-29 06:34:14.036626 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.036641 | orchestrator | 2025-09-29 06:34:14.036657 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-29 06:34:14.036675 | orchestrator | Monday 29 September 2025 06:28:59 +0000 (0:00:00.130) 0:03:34.264 ****** 2025-09-29 06:34:14.036692 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.036709 | 
orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.036726 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.036736 | orchestrator | 2025-09-29 06:34:14.036746 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-29 06:34:14.036766 | orchestrator | Monday 29 September 2025 06:28:59 +0000 (0:00:00.308) 0:03:34.573 ****** 2025-09-29 06:34:14.036776 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-29 06:34:14.036785 | orchestrator | 2025-09-29 06:34:14.036795 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-29 06:34:14.036804 | orchestrator | Monday 29 September 2025 06:29:00 +0000 (0:00:00.872) 0:03:35.445 ****** 2025-09-29 06:34:14.036814 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.036823 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.036833 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.036842 | orchestrator | 2025-09-29 06:34:14.036850 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-29 06:34:14.036858 | orchestrator | Monday 29 September 2025 06:29:00 +0000 (0:00:00.277) 0:03:35.722 ****** 2025-09-29 06:34:14.036866 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:34:14.036874 | orchestrator | 2025-09-29 06:34:14.036882 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-29 06:34:14.036890 | orchestrator | Monday 29 September 2025 06:29:01 +0000 (0:00:00.532) 0:03:36.255 ****** 2025-09-29 06:34:14.036900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.036923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.036934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.036949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.036957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.036966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.036974 | orchestrator | 2025-09-29 06:34:14.036982 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-29 06:34:14.036990 | orchestrator | Monday 29 September 2025 06:29:03 +0000 (0:00:02.453) 0:03:38.708 ****** 2025-09-29 06:34:14.037009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:34:14.037024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.037032 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.037041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:34:14.037050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.037058 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.037078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:34:14.037092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.037101 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.037109 | orchestrator | 2025-09-29 06:34:14.037117 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-29 06:34:14.037125 | orchestrator | Monday 29 September 2025 06:29:04 +0000 (0:00:00.816) 0:03:39.525 ****** 2025-09-29 06:34:14.037133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:34:14.037142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.037150 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.037169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:34:14.037184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.037193 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.037201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-29 06:34:14.037210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.037218 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.037226 | orchestrator | 2025-09-29 06:34:14.037234 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-29 06:34:14.037242 | orchestrator | Monday 29 September 2025 06:29:05 +0000 
(0:00:00.776) 0:03:40.302 ****** 2025-09-29 06:34:14.037257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.037298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.037310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-29 06:34:14.037319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037356 | orchestrator |
2025-09-29 06:34:14.037364 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-09-29 06:34:14.037379 | orchestrator | Monday 29 September 2025 06:29:07 +0000 (0:00:02.309) 0:03:42.612 ******
2025-09-29 06:34:14.037389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037458 | orchestrator |
2025-09-29 06:34:14.037466 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-09-29 06:34:14.037474 | orchestrator | Monday 29 September 2025 06:29:13 +0000 (0:00:05.718) 0:03:48.330 ******
2025-09-29 06:34:14.037482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037505 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:34:14.037524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037542 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:34:14.037550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037567 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:34:14.037580 | orchestrator |
2025-09-29 06:34:14.037588 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-09-29 06:34:14.037596 | orchestrator | Monday 29 September 2025 06:29:14 +0000 (0:00:00.606) 0:03:48.936 ******
2025-09-29 06:34:14.037608 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:34:14.037620 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:34:14.037634 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:34:14.037646 | orchestrator |
2025-09-29 06:34:14.037659 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-09-29 06:34:14.037672 | orchestrator | Monday 29 September 2025 06:29:15 +0000 (0:00:01.507) 0:03:50.444 ******
2025-09-29 06:34:14.037685 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:34:14.037706 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:34:14.037719 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:34:14.037731 | orchestrator |
2025-09-29 06:34:14.037740 | orchestrator | TASK [nova : Check nova containers] ********************************************
2025-09-29 06:34:14.037747 | orchestrator | Monday 29 September 2025 06:29:16 +0000 (0:00:00.338) 0:03:50.783 ******
2025-09-29 06:34:14.037761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-29 06:34:14.037802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.037831 | orchestrator |
2025-09-29 06:34:14.037840 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-29 06:34:14.037848 | orchestrator | Monday 29 September 2025 06:29:18 +0000 (0:00:02.095) 0:03:52.878 ******
2025-09-29 06:34:14.037856 | orchestrator |
2025-09-29 06:34:14.037865 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-29 06:34:14.037872 | orchestrator | Monday 29 September 2025 06:29:18 +0000 (0:00:00.131) 0:03:53.010 ******
2025-09-29 06:34:14.037880 | orchestrator |
2025-09-29 06:34:14.037888 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-29 06:34:14.037896 | orchestrator | Monday 29 September 2025 06:29:18 +0000 (0:00:00.131) 0:03:53.142 ******
2025-09-29 06:34:14.037904 | orchestrator |
2025-09-29 06:34:14.037912 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-09-29 06:34:14.037920 | orchestrator | Monday 29 September 2025 06:29:18 +0000 (0:00:00.136) 0:03:53.278 ******
2025-09-29 06:34:14.037928 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:34:14.037935 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:34:14.037943 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:34:14.037951 | orchestrator |
2025-09-29 06:34:14.037959 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-09-29 06:34:14.037966 | orchestrator | Monday 29 September 2025 06:29:40 +0000 (0:00:22.411) 0:04:15.690 ******
2025-09-29 06:34:14.037980 | orchestrator | changed: [testbed-node-0]
2025-09-29 06:34:14.037988 | orchestrator | changed: [testbed-node-1]
2025-09-29 06:34:14.037995 | orchestrator | changed: [testbed-node-2]
2025-09-29 06:34:14.038003 | orchestrator |
2025-09-29 06:34:14.038011 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-09-29 06:34:14.038070 | orchestrator |
2025-09-29 06:34:14.038078 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-29 06:34:14.038086 | orchestrator | Monday 29 September 2025 06:29:51 +0000 (0:00:10.650) 0:04:26.340 ******
2025-09-29 06:34:14.038094 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:34:14.038103 | orchestrator |
2025-09-29 06:34:14.038110 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-29 06:34:14.038118 | orchestrator | Monday 29 September 2025 06:29:52 +0000 (0:00:01.016) 0:04:27.356 ******
2025-09-29 06:34:14.038126 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:34:14.038134 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:34:14.038142 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:34:14.038150 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:34:14.038158 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:34:14.038165 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:34:14.038173 | orchestrator |
2025-09-29 06:34:14.038181 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-09-29 06:34:14.038189 | orchestrator | Monday 29 September 2025 06:29:53 +0000 (0:00:00.465) 0:04:27.821 ******
2025-09-29 06:34:14.038197 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:34:14.038205 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:34:14.038213 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:34:14.038220 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-29 06:34:14.038228 | orchestrator |
2025-09-29 06:34:14.038236 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-29 06:34:14.038247 | orchestrator | Monday 29 September 2025 06:29:53 +0000 (0:00:00.797) 0:04:28.619 ******
2025-09-29 06:34:14.038261 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-09-29 06:34:14.038292 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-09-29 06:34:14.038303 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-09-29 06:34:14.038315 | orchestrator |
2025-09-29 06:34:14.038335 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-29 06:34:14.038347 | orchestrator | Monday 29 September 2025 06:29:54 +0000 (0:00:00.673) 0:04:29.292 ******
2025-09-29 06:34:14.038359 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-09-29 06:34:14.038372 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-09-29 06:34:14.038386 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-09-29 06:34:14.038400 | orchestrator |
2025-09-29 06:34:14.038413 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-29 06:34:14.038424 | orchestrator | Monday 29 September 2025 06:29:55 +0000 (0:00:01.148) 0:04:30.441 ******
2025-09-29 06:34:14.038432 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-09-29 06:34:14.038440 | orchestrator | skipping: [testbed-node-3]
2025-09-29 06:34:14.038454 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-09-29 06:34:14.038462 | orchestrator | skipping: [testbed-node-4]
2025-09-29 06:34:14.038470 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-09-29 06:34:14.038478 | orchestrator | skipping: [testbed-node-5]
2025-09-29 06:34:14.038485 | orchestrator |
2025-09-29 06:34:14.038494 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-09-29 06:34:14.038501 | orchestrator | Monday 29 September 2025 06:29:56 +0000 (0:00:00.705) 0:04:31.147 ******
2025-09-29 06:34:14.038509 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:34:14.038525 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:34:14.038533 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:34:14.038541 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:34:14.038549 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:34:14.038557 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:34:14.038565 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:34:14.038572 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:34:14.038580 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:34:14.038588 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:34:14.038596 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:34:14.038604 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-29 06:34:14.038611 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:34:14.038619 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:34:14.038627 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-29 06:34:14.038635 | orchestrator |
2025-09-29 06:34:14.038643 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-09-29 06:34:14.038651 | orchestrator | Monday 29 September 2025 06:29:57 +0000 (0:00:01.005) 0:04:32.153 ******
2025-09-29 06:34:14.038658 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:34:14.038666 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:34:14.038674 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:34:14.038682 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:34:14.038689 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:34:14.038697 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:34:14.038705 | orchestrator |
2025-09-29 06:34:14.038713 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-09-29 06:34:14.038720 | orchestrator | Monday 29 September 2025 06:29:58 +0000 (0:00:01.357) 0:04:33.511 ******
2025-09-29 06:34:14.038728 | orchestrator | skipping: [testbed-node-0]
2025-09-29 06:34:14.038736 | orchestrator | skipping: [testbed-node-1]
2025-09-29 06:34:14.038743 | orchestrator | skipping: [testbed-node-2]
2025-09-29 06:34:14.038751 | orchestrator | changed: [testbed-node-3]
2025-09-29 06:34:14.038759 | orchestrator | changed: [testbed-node-5]
2025-09-29 06:34:14.038766 | orchestrator | changed: [testbed-node-4]
2025-09-29 06:34:14.038774 | orchestrator |
2025-09-29 06:34:14.038782 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-09-29 06:34:14.038790 | orchestrator | Monday 29 September 2025 06:30:00 +0000 (0:00:01.799) 0:04:35.310 ******
2025-09-29 06:34:14.038799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-29 06:34:14.038827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-29 06:34:14.038842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-29 06:34:14.038852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-29 06:34:14.038860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-29 06:34:14.038868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-29 06:34:14.038877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-29 06:34:14.038893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.038916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-29 06:34:14.038925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-29 06:34:14.038933 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.038941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.038949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.038963 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.038981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.038990 | orchestrator |
2025-09-29 06:34:14.038998 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-29 06:34:14.039005 | orchestrator | Monday 29 September 2025 06:30:02 +0000 (0:00:02.330) 0:04:37.641 ******
2025-09-29 06:34:14.039014 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-29 06:34:14.039023 | orchestrator |
2025-09-29 06:34:14.039031 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-29 06:34:14.039039 | orchestrator | Monday 29 September 2025 06:30:04 +0000 (0:00:01.182) 0:04:38.823 ******
2025-09-29 06:34:14.039047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039064 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039121 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039193 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.039206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-29 06:34:14.039228 | orchestrator |
2025-09-29 06:34:14.039241 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-29 06:34:14.039252 | orchestrator | Monday 29 September 2025 06:30:07 +0000 (0:00:03.631) 0:04:42.455 ******
2025-09-29 06:34:14.039459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-29 06:34:14.039498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla',
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.039506 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.039520 | orchestrator | 
skipping: [testbed-node-3] 2025-09-29 06:34:14.039527 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.039548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039555 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.039566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.039573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.039580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039587 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.039594 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.039617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039624 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.039635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.039646 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039654 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.039661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.039668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039675 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.039682 | orchestrator | 2025-09-29 
06:34:14.039689 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-29 06:34:14.039696 | orchestrator | Monday 29 September 2025 06:30:09 +0000 (0:00:01.287) 0:04:43.742 ******
2025-09-29 06:34:14.039703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-29 06:34:14.039715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-29 06:34:14.039726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment':
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039733 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.039743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.039750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.039757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039772 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.039779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2025-09-29 06:34:14.039786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.039801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039808 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.039815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.039822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039834 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.039841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.039848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039855 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.039862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.039874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.039881 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.039888 | orchestrator | 2025-09-29 06:34:14.039895 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-29 06:34:14.039902 | orchestrator | Monday 29 September 2025 06:30:10 +0000 (0:00:01.817) 0:04:45.560 ****** 2025-09-29 06:34:14.039909 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.039915 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.039922 | orchestrator | skipping: [testbed-node-2] 
2025-09-29 06:34:14.039931 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-29 06:34:14.039939 | orchestrator | 2025-09-29 06:34:14.039945 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-29 06:34:14.039952 | orchestrator | Monday 29 September 2025 06:30:11 +0000 (0:00:00.995) 0:04:46.555 ****** 2025-09-29 06:34:14.039959 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-29 06:34:14.039966 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-29 06:34:14.039972 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-29 06:34:14.039979 | orchestrator | 2025-09-29 06:34:14.039985 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-29 06:34:14.039992 | orchestrator | Monday 29 September 2025 06:30:12 +0000 (0:00:00.907) 0:04:47.463 ****** 2025-09-29 06:34:14.040003 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-29 06:34:14.040010 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-29 06:34:14.040017 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-29 06:34:14.040023 | orchestrator | 2025-09-29 06:34:14.040029 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-29 06:34:14.040036 | orchestrator | Monday 29 September 2025 06:30:13 +0000 (0:00:00.898) 0:04:48.361 ****** 2025-09-29 06:34:14.040043 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:34:14.040050 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:34:14.040056 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:34:14.040063 | orchestrator | 2025-09-29 06:34:14.040070 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-29 06:34:14.040076 | orchestrator | Monday 29 September 2025 06:30:14 +0000 (0:00:00.506) 0:04:48.868 ****** 2025-09-29 06:34:14.040083 | 
orchestrator | ok: [testbed-node-3] 2025-09-29 06:34:14.040089 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:34:14.040096 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:34:14.040102 | orchestrator | 2025-09-29 06:34:14.040109 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-29 06:34:14.040116 | orchestrator | Monday 29 September 2025 06:30:14 +0000 (0:00:00.741) 0:04:49.609 ****** 2025-09-29 06:34:14.040122 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-29 06:34:14.040129 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-29 06:34:14.040135 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-29 06:34:14.040142 | orchestrator | 2025-09-29 06:34:14.040148 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-29 06:34:14.040155 | orchestrator | Monday 29 September 2025 06:30:16 +0000 (0:00:01.176) 0:04:50.785 ****** 2025-09-29 06:34:14.040162 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-29 06:34:14.040169 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-29 06:34:14.040175 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-29 06:34:14.040182 | orchestrator | 2025-09-29 06:34:14.040189 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-29 06:34:14.040195 | orchestrator | Monday 29 September 2025 06:30:17 +0000 (0:00:01.168) 0:04:51.953 ****** 2025-09-29 06:34:14.040202 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-29 06:34:14.040208 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-29 06:34:14.040215 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-29 06:34:14.040222 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-29 06:34:14.040228 | orchestrator | 
changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-29 06:34:14.040235 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-29 06:34:14.040241 | orchestrator | 2025-09-29 06:34:14.040248 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-29 06:34:14.040254 | orchestrator | Monday 29 September 2025 06:30:20 +0000 (0:00:03.731) 0:04:55.685 ****** 2025-09-29 06:34:14.040261 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.040299 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.040306 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.040313 | orchestrator | 2025-09-29 06:34:14.040320 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-29 06:34:14.040326 | orchestrator | Monday 29 September 2025 06:30:21 +0000 (0:00:00.505) 0:04:56.191 ****** 2025-09-29 06:34:14.040333 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.040340 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.040346 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.040353 | orchestrator | 2025-09-29 06:34:14.040360 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-29 06:34:14.040366 | orchestrator | Monday 29 September 2025 06:30:21 +0000 (0:00:00.300) 0:04:56.492 ****** 2025-09-29 06:34:14.040373 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.040389 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.040396 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.040402 | orchestrator | 2025-09-29 06:34:14.040409 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-29 06:34:14.040416 | orchestrator | Monday 29 September 2025 06:30:22 +0000 (0:00:01.203) 0:04:57.695 ****** 2025-09-29 06:34:14.040428 | orchestrator | changed: [testbed-node-3] => 
(item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-29 06:34:14.040436 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-29 06:34:14.040443 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-29 06:34:14.040449 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-29 06:34:14.040460 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-29 06:34:14.040467 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-29 06:34:14.040474 | orchestrator | 2025-09-29 06:34:14.040480 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-29 06:34:14.040487 | orchestrator | Monday 29 September 2025 06:30:26 +0000 (0:00:03.219) 0:05:00.915 ****** 2025-09-29 06:34:14.040494 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-29 06:34:14.040500 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-29 06:34:14.040507 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-29 06:34:14.040514 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-29 06:34:14.040521 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.040527 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-29 06:34:14.040534 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.040541 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-29 06:34:14.040547 | orchestrator | changed: [testbed-node-5] 
2025-09-29 06:34:14.040554 | orchestrator | 2025-09-29 06:34:14.040561 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-29 06:34:14.040567 | orchestrator | Monday 29 September 2025 06:30:29 +0000 (0:00:03.063) 0:05:03.978 ****** 2025-09-29 06:34:14.040574 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.040581 | orchestrator | 2025-09-29 06:34:14.040587 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-29 06:34:14.040594 | orchestrator | Monday 29 September 2025 06:30:29 +0000 (0:00:00.134) 0:05:04.113 ****** 2025-09-29 06:34:14.040601 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.040607 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.040614 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.040620 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.040627 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.040634 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.040640 | orchestrator | 2025-09-29 06:34:14.040647 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-29 06:34:14.040654 | orchestrator | Monday 29 September 2025 06:30:29 +0000 (0:00:00.568) 0:05:04.681 ****** 2025-09-29 06:34:14.040661 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-29 06:34:14.040667 | orchestrator | 2025-09-29 06:34:14.040674 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-29 06:34:14.040681 | orchestrator | Monday 29 September 2025 06:30:30 +0000 (0:00:00.756) 0:05:05.437 ****** 2025-09-29 06:34:14.040687 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.040704 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.040711 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.040723 | orchestrator | skipping: 
[testbed-node-0] 2025-09-29 06:34:14.040729 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.040744 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.040751 | orchestrator | 2025-09-29 06:34:14.040757 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-29 06:34:14.040764 | orchestrator | Monday 29 September 2025 06:30:31 +0000 (0:00:00.791) 0:05:06.228 ****** 2025-09-29 06:34:14.040771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040892 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.040915 | orchestrator | 2025-09-29 06:34:14.040922 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-29 06:34:14.040928 | orchestrator | Monday 29 September 2025 06:30:35 +0000 (0:00:03.717) 0:05:09.945 ****** 2025-09-29 06:34:14.040936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.040946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.040954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.040961 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.040974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:14.041064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.041071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041095 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 
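The per-service dicts echoed in the task output above follow the kolla-ansible service-definition shape. Note the empty-string entries (`''`) trailing each `volumes` list: these appear when optional mounts are templated out conditionally. A minimal sketch of filtering them, assuming the dict shape shown in the log (the service values here are copied from the output above; the helper name is illustrative, not part of kolla-ansible):

```python
# Sketch: tidy a kolla-style service definition as echoed in the log above.
# effective_volumes() is an illustrative helper, not a kolla-ansible API.

def effective_volumes(service):
    """Return the volume mounts with empty placeholders removed.

    kolla-ansible renders optional mounts conditionally, leaving ''
    entries in the list (visible in the log output above).
    """
    return [v for v in service.get("volumes", []) if v]

# Values taken verbatim from the nova-novncproxy entry logged above.
nova_novncproxy = {
    "container_name": "nova_novncproxy",
    "image": "registry.osism.tech/kolla/nova-novncproxy:2024.2",
    "enabled": True,
    "volumes": [
        "/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
        "", "",
    ],
}

print(effective_volumes(nova_novncproxy))
```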
2025-09-29 06:34:14.041126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.041159 | orchestrator | 2025-09-29 06:34:14.041166 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-29 06:34:14.041173 | orchestrator | Monday 29 September 2025 06:30:41 +0000 (0:00:05.974) 0:05:15.919 ****** 2025-09-29 06:34:14.041179 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.041186 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.041193 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.041199 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041206 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.041213 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041219 | orchestrator | 2025-09-29 06:34:14.041226 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-29 06:34:14.041233 | orchestrator | Monday 29 September 2025 06:30:42 +0000 (0:00:01.208) 0:05:17.128 ****** 2025-09-29 06:34:14.041239 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-29 06:34:14.041246 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-29 06:34:14.041252 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-29 06:34:14.041259 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  
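Each service definition above carries a `healthcheck` block whose `interval`, `timeout`, and `start_period` are plain second counts stored as strings. As a rough illustration of how such a dict corresponds to Docker's health-check options (kolla-ansible performs this translation internally through its container module; the function below is only a hedged sketch of the mapping, not the actual implementation):

```python
# Sketch: map a kolla healthcheck dict (as echoed in the log above) onto
# docker-run style health-check flags. Illustrative only.

def healthcheck_flags(hc):
    """Render docker-run style flags from a kolla healthcheck dict.

    The 'test' list starts with a 'CMD-SHELL' marker followed by the
    shell command, matching the entries in the log output above.
    """
    cmd = " ".join(hc["test"][1:])  # drop the leading 'CMD-SHELL' marker
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Values copied from the nova-conductor entry logged above.
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
    "timeout": "30",
}
print(healthcheck_flags(hc))
```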
2025-09-29 06:34:14.041285 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.041292 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-29 06:34:14.041303 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-29 06:34:14.041310 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041317 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-29 06:34:14.041323 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-29 06:34:14.041330 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041337 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-29 06:34:14.041343 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-29 06:34:14.041358 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-29 06:34:14.041365 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-29 06:34:14.041372 | orchestrator | 2025-09-29 06:34:14.041378 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-29 06:34:14.041385 | orchestrator | Monday 29 September 2025 06:30:45 +0000 (0:00:03.399) 0:05:20.528 ****** 2025-09-29 06:34:14.041392 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.041399 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.041405 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.041412 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.041418 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041425 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041432 | 
orchestrator | 2025-09-29 06:34:14.041438 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-29 06:34:14.041445 | orchestrator | Monday 29 September 2025 06:30:46 +0000 (0:00:00.539) 0:05:21.067 ****** 2025-09-29 06:34:14.041452 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-29 06:34:14.041459 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-29 06:34:14.041466 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-29 06:34:14.041472 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-29 06:34:14.041479 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-29 06:34:14.041486 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-29 06:34:14.041492 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-29 06:34:14.041499 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-29 06:34:14.041505 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-29 06:34:14.041512 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-29 06:34:14.041519 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041525 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 
'nova-libvirt'})  2025-09-29 06:34:14.041532 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.041539 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-29 06:34:14.041545 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041552 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-29 06:34:14.041559 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-29 06:34:14.041565 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-29 06:34:14.041572 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-29 06:34:14.041579 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-29 06:34:14.041585 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-29 06:34:14.041597 | orchestrator | 2025-09-29 06:34:14.041604 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-29 06:34:14.041611 | orchestrator | Monday 29 September 2025 06:30:51 +0000 (0:00:04.734) 0:05:25.801 ****** 2025-09-29 06:34:14.041618 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-29 06:34:14.041624 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-29 06:34:14.041631 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-29 06:34:14.041638 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'})  2025-09-29 06:34:14.041648 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-29 06:34:14.041655 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-29 06:34:14.041662 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-29 06:34:14.041668 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-29 06:34:14.041675 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-29 06:34:14.041685 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-29 06:34:14.041692 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-29 06:34:14.041699 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-29 06:34:14.041706 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-29 06:34:14.041712 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.041719 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-29 06:34:14.041725 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041732 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-29 06:34:14.041739 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041746 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-29 06:34:14.041752 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-29 06:34:14.041759 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-29 
06:34:14.041766 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-29 06:34:14.041772 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-29 06:34:14.041779 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-29 06:34:14.041785 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-29 06:34:14.041792 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-29 06:34:14.041799 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-29 06:34:14.041805 | orchestrator | 2025-09-29 06:34:14.041812 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-29 06:34:14.041818 | orchestrator | Monday 29 September 2025 06:30:57 +0000 (0:00:06.090) 0:05:31.892 ****** 2025-09-29 06:34:14.041825 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.041832 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.041838 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.041845 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.041851 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041858 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041864 | orchestrator | 2025-09-29 06:34:14.041871 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-29 06:34:14.041883 | orchestrator | Monday 29 September 2025 06:30:57 +0000 (0:00:00.615) 0:05:32.507 ****** 2025-09-29 06:34:14.041889 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.041896 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.041902 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.041909 | orchestrator | 
skipping: [testbed-node-0] 2025-09-29 06:34:14.041915 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041922 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041929 | orchestrator | 2025-09-29 06:34:14.041936 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-29 06:34:14.041942 | orchestrator | Monday 29 September 2025 06:30:58 +0000 (0:00:00.531) 0:05:33.039 ****** 2025-09-29 06:34:14.041949 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.041956 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.041962 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.041969 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.041975 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.041982 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.041989 | orchestrator | 2025-09-29 06:34:14.041995 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-29 06:34:14.042002 | orchestrator | Monday 29 September 2025 06:31:00 +0000 (0:00:01.838) 0:05:34.877 ****** 2025-09-29 06:34:14.042009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.042054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.042063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.042070 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.042077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.042089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.042096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.042103 | orchestrator | skipping: [testbed-node-3] 
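The per-host result lines in this console follow a fixed shape, `<timestamp> | orchestrator | <status>: [<host>] ...`. When sifting through long runs like this one, a small parser can tally outcomes per host; a minimal sketch, assuming only the `ok`/`changed`/`skipping` line forms visible above (the regex and function are illustrative, not part of Zuul or Ansible):

```python
import re
from collections import Counter

# Sketch: tally task outcomes per host from console lines shaped like
# the output above. Covers only ok/changed/skipping result lines.
RESULT = re.compile(r"\|\s*orchestrator\s*\|\s*(ok|changed|skipping):\s*\[([\w-]+)\]")

def tally(lines):
    """Count (host, status) pairs across console result lines."""
    counts = Counter()
    for line in lines:
        m = RESULT.search(line)
        if m:
            status, host = m.groups()
            counts[(host, status)] += 1
    return counts

sample = [
    "2025-09-29 06:34:14.041969 | orchestrator | changed: [testbed-node-3]",
    "2025-09-29 06:34:14.041949 | orchestrator | skipping: [testbed-node-1]",
]
print(tally(sample))
```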
2025-09-29 06:34:14.042114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-29 06:34:14.042124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-29 06:34:14.042131 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.042143 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.042150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.042157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.042163 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.042170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.042182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.042189 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.042200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-29 06:34:14.042207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-29 06:34:14.042218 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.042225 | orchestrator | 2025-09-29 06:34:14.042232 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-09-29 06:34:14.042238 | orchestrator | Monday 29 September 2025 06:31:01 +0000 (0:00:01.400) 0:05:36.277 ****** 2025-09-29 06:34:14.042245 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-29 06:34:14.042252 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-29 06:34:14.042258 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.042279 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-29 06:34:14.042286 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-29 06:34:14.042293 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.042300 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-29 06:34:14.042306 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-29 06:34:14.042313 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.042320 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-29 06:34:14.042326 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-29 06:34:14.042333 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.042340 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-29 06:34:14.042346 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-29 06:34:14.042353 | 
orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.042360 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-29 06:34:14.042366 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-29 06:34:14.042373 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.042380 | orchestrator | 2025-09-29 06:34:14.042386 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-09-29 06:34:14.042393 | orchestrator | Monday 29 September 2025 06:31:02 +0000 (0:00:00.710) 0:05:36.988 ****** 2025-09-29 06:34:14.042400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042445 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042452 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042480 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042535 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-29 06:34:14.042547 | orchestrator | 2025-09-29 06:34:14.042553 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-29 06:34:14.042560 | orchestrator | Monday 29 September 2025 06:31:04 +0000 (0:00:02.581) 0:05:39.569 ****** 2025-09-29 06:34:14.042567 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.042574 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.042581 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.042587 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.042594 | orchestrator | skipping: 
[testbed-node-1] 2025-09-29 06:34:14.042601 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.042607 | orchestrator | 2025-09-29 06:34:14.042614 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-29 06:34:14.042621 | orchestrator | Monday 29 September 2025 06:31:05 +0000 (0:00:00.819) 0:05:40.389 ****** 2025-09-29 06:34:14.042627 | orchestrator | 2025-09-29 06:34:14.042634 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-29 06:34:14.042641 | orchestrator | Monday 29 September 2025 06:31:05 +0000 (0:00:00.135) 0:05:40.524 ****** 2025-09-29 06:34:14.042647 | orchestrator | 2025-09-29 06:34:14.042654 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-29 06:34:14.042661 | orchestrator | Monday 29 September 2025 06:31:05 +0000 (0:00:00.131) 0:05:40.656 ****** 2025-09-29 06:34:14.042668 | orchestrator | 2025-09-29 06:34:14.042674 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-29 06:34:14.042681 | orchestrator | Monday 29 September 2025 06:31:06 +0000 (0:00:00.141) 0:05:40.797 ****** 2025-09-29 06:34:14.042687 | orchestrator | 2025-09-29 06:34:14.042694 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-29 06:34:14.042701 | orchestrator | Monday 29 September 2025 06:31:06 +0000 (0:00:00.154) 0:05:40.951 ****** 2025-09-29 06:34:14.042707 | orchestrator | 2025-09-29 06:34:14.042714 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-09-29 06:34:14.042720 | orchestrator | Monday 29 September 2025 06:31:06 +0000 (0:00:00.135) 0:05:41.087 ****** 2025-09-29 06:34:14.042727 | orchestrator | 2025-09-29 06:34:14.042734 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-09-29 06:34:14.042740 
| orchestrator | Monday 29 September 2025 06:31:06 +0000 (0:00:00.303) 0:05:41.390 ****** 2025-09-29 06:34:14.042747 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.042754 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:34:14.042760 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:34:14.042767 | orchestrator | 2025-09-29 06:34:14.042774 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-09-29 06:34:14.042780 | orchestrator | Monday 29 September 2025 06:31:18 +0000 (0:00:11.907) 0:05:53.298 ****** 2025-09-29 06:34:14.042787 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.042794 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:34:14.042800 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:34:14.042807 | orchestrator | 2025-09-29 06:34:14.042814 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-09-29 06:34:14.042820 | orchestrator | Monday 29 September 2025 06:31:32 +0000 (0:00:14.230) 0:06:07.529 ****** 2025-09-29 06:34:14.042827 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.042838 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.042845 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.042851 | orchestrator | 2025-09-29 06:34:14.042858 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-09-29 06:34:14.042865 | orchestrator | Monday 29 September 2025 06:31:55 +0000 (0:00:23.010) 0:06:30.540 ****** 2025-09-29 06:34:14.042871 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.042878 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.042885 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.042891 | orchestrator | 2025-09-29 06:34:14.042898 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-09-29 06:34:14.042904 | orchestrator 
| Monday 29 September 2025 06:32:30 +0000 (0:00:35.199) 0:07:05.740 ****** 2025-09-29 06:34:14.042911 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-09-29 06:34:14.042918 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-09-29 06:34:14.042924 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-09-29 06:34:14.042931 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.042938 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.042944 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.042951 | orchestrator | 2025-09-29 06:34:14.042958 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-09-29 06:34:14.042964 | orchestrator | Monday 29 September 2025 06:32:37 +0000 (0:00:06.272) 0:07:12.012 ****** 2025-09-29 06:34:14.042971 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.042978 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.042984 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.042991 | orchestrator | 2025-09-29 06:34:14.043001 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-29 06:34:14.043008 | orchestrator | Monday 29 September 2025 06:32:38 +0000 (0:00:00.814) 0:07:12.827 ****** 2025-09-29 06:34:14.043015 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:34:14.043022 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:34:14.043028 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:34:14.043035 | orchestrator | 2025-09-29 06:34:14.043042 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-29 06:34:14.043048 | orchestrator | Monday 29 September 2025 06:33:02 +0000 (0:00:24.418) 0:07:37.245 ****** 2025-09-29 
06:34:14.043055 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.043062 | orchestrator | 2025-09-29 06:34:14.043072 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-29 06:34:14.043079 | orchestrator | Monday 29 September 2025 06:33:02 +0000 (0:00:00.109) 0:07:37.354 ****** 2025-09-29 06:34:14.043085 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.043092 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.043099 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.043105 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.043112 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.043118 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-09-29 06:34:14.043125 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:34:14.043132 | orchestrator | 2025-09-29 06:34:14.043138 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-09-29 06:34:14.043145 | orchestrator | Monday 29 September 2025 06:33:25 +0000 (0:00:22.525) 0:07:59.880 ****** 2025-09-29 06:34:14.043152 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.043158 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.043165 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.043171 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.043178 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.043189 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.043196 | orchestrator | 2025-09-29 06:34:14.043203 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-09-29 06:34:14.043209 | orchestrator | Monday 29 September 2025 06:33:32 +0000 (0:00:07.827) 0:08:07.707 ****** 2025-09-29 06:34:14.043216 
| orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.043222 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.043229 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.043236 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.043242 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.043249 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2025-09-29 06:34:14.043256 | orchestrator | 2025-09-29 06:34:14.043262 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-29 06:34:14.043289 | orchestrator | Monday 29 September 2025 06:33:36 +0000 (0:00:03.156) 0:08:10.863 ****** 2025-09-29 06:34:14.043297 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:34:14.043303 | orchestrator | 2025-09-29 06:34:14.043310 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-29 06:34:14.043317 | orchestrator | Monday 29 September 2025 06:33:50 +0000 (0:00:14.018) 0:08:24.882 ****** 2025-09-29 06:34:14.043323 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:34:14.043330 | orchestrator | 2025-09-29 06:34:14.043337 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-09-29 06:34:14.043344 | orchestrator | Monday 29 September 2025 06:33:51 +0000 (0:00:01.241) 0:08:26.124 ****** 2025-09-29 06:34:14.043350 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.043357 | orchestrator | 2025-09-29 06:34:14.043364 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-09-29 06:34:14.043370 | orchestrator | Monday 29 September 2025 06:33:52 +0000 (0:00:01.170) 0:08:27.294 ****** 2025-09-29 06:34:14.043377 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:34:14.043383 | 
orchestrator | 2025-09-29 06:34:14.043390 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-09-29 06:34:14.043396 | orchestrator | Monday 29 September 2025 06:34:04 +0000 (0:00:12.204) 0:08:39.499 ****** 2025-09-29 06:34:14.043403 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:34:14.043410 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:34:14.043417 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:34:14.043423 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:34:14.043430 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:34:14.043437 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:34:14.043443 | orchestrator | 2025-09-29 06:34:14.043450 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-09-29 06:34:14.043457 | orchestrator | 2025-09-29 06:34:14.043463 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-09-29 06:34:14.043470 | orchestrator | Monday 29 September 2025 06:34:06 +0000 (0:00:01.747) 0:08:41.246 ****** 2025-09-29 06:34:14.043476 | orchestrator | changed: [testbed-node-1] 2025-09-29 06:34:14.043483 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:34:14.043490 | orchestrator | changed: [testbed-node-2] 2025-09-29 06:34:14.043496 | orchestrator | 2025-09-29 06:34:14.043503 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-09-29 06:34:14.043509 | orchestrator | 2025-09-29 06:34:14.043516 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-09-29 06:34:14.043522 | orchestrator | Monday 29 September 2025 06:34:07 +0000 (0:00:01.096) 0:08:42.342 ****** 2025-09-29 06:34:14.043529 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.043535 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.043542 | orchestrator | skipping: [testbed-node-2] 2025-09-29 
06:34:14.043549 | orchestrator | 2025-09-29 06:34:14.043555 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-09-29 06:34:14.043562 | orchestrator | 2025-09-29 06:34:14.043569 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-09-29 06:34:14.043580 | orchestrator | Monday 29 September 2025 06:34:08 +0000 (0:00:00.519) 0:08:42.861 ****** 2025-09-29 06:34:14.043591 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-09-29 06:34:14.043598 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-09-29 06:34:14.043605 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-09-29 06:34:14.043611 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-09-29 06:34:14.043618 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-09-29 06:34:14.043625 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-09-29 06:34:14.043631 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:34:14.043638 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-09-29 06:34:14.043648 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-09-29 06:34:14.043655 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-09-29 06:34:14.043661 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-09-29 06:34:14.043668 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-09-29 06:34:14.043675 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-09-29 06:34:14.043681 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:34:14.043688 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-09-29 06:34:14.043694 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-09-29 06:34:14.043701 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-09-29 06:34:14.043707 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-09-29 06:34:14.043714 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-09-29 06:34:14.043721 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-09-29 06:34:14.043727 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:34:14.043734 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-09-29 06:34:14.043740 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-09-29 06:34:14.043747 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-09-29 06:34:14.043753 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-09-29 06:34:14.043760 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-09-29 06:34:14.043767 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-09-29 06:34:14.043773 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.043780 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-09-29 06:34:14.043787 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-09-29 06:34:14.043793 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-09-29 06:34:14.043800 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-09-29 06:34:14.043806 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-09-29 06:34:14.043813 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-09-29 06:34:14.043820 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.043826 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-09-29 06:34:14.043833 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-09-29 06:34:14.043840 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-09-29 06:34:14.043847 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-09-29 06:34:14.043853 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-09-29 06:34:14.043860 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-09-29 06:34:14.043867 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.043873 | orchestrator | 2025-09-29 06:34:14.043880 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-09-29 06:34:14.043891 | orchestrator | 2025-09-29 06:34:14.043898 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-09-29 06:34:14.043905 | orchestrator | Monday 29 September 2025 06:34:09 +0000 (0:00:01.314) 0:08:44.176 ****** 2025-09-29 06:34:14.043911 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-09-29 06:34:14.043918 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-09-29 06:34:14.043925 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.043932 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-09-29 06:34:14.043938 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-09-29 06:34:14.043945 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.043951 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-09-29 06:34:14.043959 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-09-29 06:34:14.043965 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.043972 | orchestrator | 2025-09-29 06:34:14.043979 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-09-29 06:34:14.043985 | orchestrator | 2025-09-29 06:34:14.043992 | orchestrator | TASK [nova : Run Nova API online database migrations] 
************************** 2025-09-29 06:34:14.043999 | orchestrator | Monday 29 September 2025 06:34:10 +0000 (0:00:00.719) 0:08:44.895 ****** 2025-09-29 06:34:14.044005 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.044012 | orchestrator | 2025-09-29 06:34:14.044018 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-09-29 06:34:14.044025 | orchestrator | 2025-09-29 06:34:14.044032 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-09-29 06:34:14.044038 | orchestrator | Monday 29 September 2025 06:34:10 +0000 (0:00:00.666) 0:08:45.562 ****** 2025-09-29 06:34:14.044045 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:34:14.044052 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:34:14.044058 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:34:14.044065 | orchestrator | 2025-09-29 06:34:14.044072 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:34:14.044082 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:34:14.044090 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-29 06:34:14.044097 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-29 06:34:14.044108 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-29 06:34:14.044115 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-29 06:34:14.044122 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-09-29 06:34:14.044128 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 
2025-09-29 06:34:14.044135 | orchestrator | 2025-09-29 06:34:14.044141 | orchestrator | 2025-09-29 06:34:14.044148 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:34:14.044155 | orchestrator | Monday 29 September 2025 06:34:11 +0000 (0:00:00.418) 0:08:45.981 ****** 2025-09-29 06:34:14.044162 | orchestrator | =============================================================================== 2025-09-29 06:34:14.044168 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 35.20s 2025-09-29 06:34:14.044175 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.82s 2025-09-29 06:34:14.044186 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.42s 2025-09-29 06:34:14.044192 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.01s 2025-09-29 06:34:14.044199 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.53s 2025-09-29 06:34:14.044206 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.41s 2025-09-29 06:34:14.044212 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.39s 2025-09-29 06:34:14.044219 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 22.06s 2025-09-29 06:34:14.044225 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.18s 2025-09-29 06:34:14.044232 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.18s 2025-09-29 06:34:14.044239 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.23s 2025-09-29 06:34:14.044245 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.02s 2025-09-29 06:34:14.044252 | orchestrator | nova-cell : 
Get a list of existing cells ------------------------------- 13.91s 2025-09-29 06:34:14.044258 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.62s 2025-09-29 06:34:14.044281 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.20s 2025-09-29 06:34:14.044288 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.91s 2025-09-29 06:34:14.044294 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.65s 2025-09-29 06:34:14.044301 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.75s 2025-09-29 06:34:14.044308 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.39s 2025-09-29 06:34:14.044315 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.83s 2025-09-29 06:34:17.070635 | orchestrator | 2025-09-29 06:34:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:20.113741 | orchestrator | 2025-09-29 06:34:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:23.153472 | orchestrator | 2025-09-29 06:34:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:26.189511 | orchestrator | 2025-09-29 06:34:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:29.220213 | orchestrator | 2025-09-29 06:34:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:32.259051 | orchestrator | 2025-09-29 06:34:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:35.296951 | orchestrator | 2025-09-29 06:34:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:38.332316 | orchestrator | 2025-09-29 06:34:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:41.373602 | orchestrator | 2025-09-29 06:34:41 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-09-29 06:34:44.414657 | orchestrator | 2025-09-29 06:34:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:47.456538 | orchestrator | 2025-09-29 06:34:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:50.499546 | orchestrator | 2025-09-29 06:34:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:53.550978 | orchestrator | 2025-09-29 06:34:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:56.594106 | orchestrator | 2025-09-29 06:34:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:34:59.636495 | orchestrator | 2025-09-29 06:34:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:35:02.675105 | orchestrator | 2025-09-29 06:35:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:35:05.717097 | orchestrator | 2025-09-29 06:35:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:35:08.761616 | orchestrator | 2025-09-29 06:35:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:35:11.797844 | orchestrator | 2025-09-29 06:35:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-29 06:35:14.841697 | orchestrator | 2025-09-29 06:35:15.170568 | orchestrator | 2025-09-29 06:35:15.176284 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Sep 29 06:35:15 UTC 2025 2025-09-29 06:35:15.176327 | orchestrator | 2025-09-29 06:35:15.563482 | orchestrator | ok: Runtime: 0:34:05.474555 2025-09-29 06:35:15.822247 | 2025-09-29 06:35:15.822388 | TASK [Bootstrap services] 2025-09-29 06:35:16.528639 | orchestrator | 2025-09-29 06:35:16.528827 | orchestrator | # BOOTSTRAP 2025-09-29 06:35:16.528852 | orchestrator | 2025-09-29 06:35:16.528866 | orchestrator | + set -e 2025-09-29 06:35:16.528880 | orchestrator | + echo 2025-09-29 06:35:16.528894 | orchestrator | + echo '# BOOTSTRAP' 2025-09-29 06:35:16.528912 | 
orchestrator | + echo 2025-09-29 06:35:16.528955 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-29 06:35:16.540371 | orchestrator | + set -e 2025-09-29 06:35:16.540446 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-29 06:35:21.122448 | orchestrator | 2025-09-29 06:35:21 | INFO  | It takes a moment until task 9cb9317f-c0af-41b3-bcf6-d9911fdece20 (flavor-manager) has been started and output is visible here. 2025-09-29 06:35:29.259133 | orchestrator | 2025-09-29 06:35:24 | INFO  | Flavor SCS-1L-1 created 2025-09-29 06:35:29.259310 | orchestrator | 2025-09-29 06:35:24 | INFO  | Flavor SCS-1L-1-5 created 2025-09-29 06:35:29.259329 | orchestrator | 2025-09-29 06:35:24 | INFO  | Flavor SCS-1V-2 created 2025-09-29 06:35:29.259339 | orchestrator | 2025-09-29 06:35:25 | INFO  | Flavor SCS-1V-2-5 created 2025-09-29 06:35:29.259348 | orchestrator | 2025-09-29 06:35:25 | INFO  | Flavor SCS-1V-4 created 2025-09-29 06:35:29.259357 | orchestrator | 2025-09-29 06:35:25 | INFO  | Flavor SCS-1V-4-10 created 2025-09-29 06:35:29.259366 | orchestrator | 2025-09-29 06:35:25 | INFO  | Flavor SCS-1V-8 created 2025-09-29 06:35:29.259377 | orchestrator | 2025-09-29 06:35:25 | INFO  | Flavor SCS-1V-8-20 created 2025-09-29 06:35:29.259399 | orchestrator | 2025-09-29 06:35:25 | INFO  | Flavor SCS-2V-4 created 2025-09-29 06:35:29.259409 | orchestrator | 2025-09-29 06:35:25 | INFO  | Flavor SCS-2V-4-10 created 2025-09-29 06:35:29.259418 | orchestrator | 2025-09-29 06:35:26 | INFO  | Flavor SCS-2V-8 created 2025-09-29 06:35:29.259427 | orchestrator | 2025-09-29 06:35:26 | INFO  | Flavor SCS-2V-8-20 created 2025-09-29 06:35:29.259435 | orchestrator | 2025-09-29 06:35:26 | INFO  | Flavor SCS-2V-16 created 2025-09-29 06:35:29.259444 | orchestrator | 2025-09-29 06:35:26 | INFO  | Flavor SCS-2V-16-50 created 2025-09-29 06:35:29.259453 | orchestrator | 2025-09-29 06:35:26 | INFO  | Flavor SCS-4V-8 created 2025-09-29 
06:35:29.259462 | orchestrator | 2025-09-29 06:35:26 | INFO  | Flavor SCS-4V-8-20 created 2025-09-29 06:35:29.259471 | orchestrator | 2025-09-29 06:35:27 | INFO  | Flavor SCS-4V-16 created 2025-09-29 06:35:29.259479 | orchestrator | 2025-09-29 06:35:27 | INFO  | Flavor SCS-4V-16-50 created 2025-09-29 06:35:29.259488 | orchestrator | 2025-09-29 06:35:27 | INFO  | Flavor SCS-4V-32 created 2025-09-29 06:35:29.259497 | orchestrator | 2025-09-29 06:35:27 | INFO  | Flavor SCS-4V-32-100 created 2025-09-29 06:35:29.259506 | orchestrator | 2025-09-29 06:35:27 | INFO  | Flavor SCS-8V-16 created 2025-09-29 06:35:29.259514 | orchestrator | 2025-09-29 06:35:28 | INFO  | Flavor SCS-8V-16-50 created 2025-09-29 06:35:29.259523 | orchestrator | 2025-09-29 06:35:28 | INFO  | Flavor SCS-8V-32 created 2025-09-29 06:35:29.259532 | orchestrator | 2025-09-29 06:35:28 | INFO  | Flavor SCS-8V-32-100 created 2025-09-29 06:35:29.259541 | orchestrator | 2025-09-29 06:35:28 | INFO  | Flavor SCS-16V-32 created 2025-09-29 06:35:29.259550 | orchestrator | 2025-09-29 06:35:28 | INFO  | Flavor SCS-16V-32-100 created 2025-09-29 06:35:29.259558 | orchestrator | 2025-09-29 06:35:28 | INFO  | Flavor SCS-2V-4-20s created 2025-09-29 06:35:29.259567 | orchestrator | 2025-09-29 06:35:28 | INFO  | Flavor SCS-4V-8-50s created 2025-09-29 06:35:29.259576 | orchestrator | 2025-09-29 06:35:29 | INFO  | Flavor SCS-8V-32-100s created 2025-09-29 06:35:31.423391 | orchestrator | 2025-09-29 06:35:31 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-09-29 06:35:41.550621 | orchestrator | 2025-09-29 06:35:41 | INFO  | Task c31ed71d-590c-4b36-8ce0-61225b5e3467 (bootstrap-basic) was prepared for execution. 2025-09-29 06:35:41.550688 | orchestrator | 2025-09-29 06:35:41 | INFO  | It takes a moment until task c31ed71d-590c-4b36-8ce0-61225b5e3467 (bootstrap-basic) has been started and output is visible here. 
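The flavor names created above follow the SCS flavor naming convention, roughly `SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]]`, where (per the SCS docs) `V` denotes an ordinary vCPU, `L` an oversubscribed low-performance core, and a trailing `s` on the disk field local SSD storage. A small illustrative parser under that assumption (not the flavor-manager's own code):

```python
import re

# Rough pattern for the SCS names seen above, e.g. SCS-2V-4-10 or SCS-2V-4-20s.
# Class letters V/L/T/C are taken from the SCS naming standard; this sketch
# only needs to recognize the shapes that appear in this log.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[VLTC])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_suffix>s?))?$"
)

def parse_scs_flavor(name):
    """Decode an SCS flavor name into its resource components."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m.group("cpus")),
        "cpu_class": m.group("cpu_class"),
        "ram_gib": int(m.group("ram")),
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("disk_suffix") == "s",
    }
```

For example, `SCS-2V-4-20s` decodes to 2 vCPUs, 4 GiB RAM, 20 GB local SSD.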
2025-09-29 06:36:42.008647 | orchestrator | 2025-09-29 06:36:42.008760 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-29 06:36:42.008773 | orchestrator | 2025-09-29 06:36:42.008782 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-29 06:36:42.008792 | orchestrator | Monday 29 September 2025 06:35:45 +0000 (0:00:00.085) 0:00:00.085 ****** 2025-09-29 06:36:42.008801 | orchestrator | ok: [localhost] 2025-09-29 06:36:42.008810 | orchestrator | 2025-09-29 06:36:42.008819 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-29 06:36:42.008827 | orchestrator | Monday 29 September 2025 06:35:46 +0000 (0:00:01.688) 0:00:01.774 ****** 2025-09-29 06:36:42.008835 | orchestrator | ok: [localhost] 2025-09-29 06:36:42.008844 | orchestrator | 2025-09-29 06:36:42.008852 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-29 06:36:42.008860 | orchestrator | Monday 29 September 2025 06:35:54 +0000 (0:00:07.493) 0:00:09.267 ****** 2025-09-29 06:36:42.008868 | orchestrator | changed: [localhost] 2025-09-29 06:36:42.008877 | orchestrator | 2025-09-29 06:36:42.008886 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-29 06:36:42.008895 | orchestrator | Monday 29 September 2025 06:36:01 +0000 (0:00:07.127) 0:00:16.395 ****** 2025-09-29 06:36:42.008903 | orchestrator | ok: [localhost] 2025-09-29 06:36:42.008911 | orchestrator | 2025-09-29 06:36:42.008919 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-29 06:36:42.008928 | orchestrator | Monday 29 September 2025 06:36:08 +0000 (0:00:06.976) 0:00:23.372 ****** 2025-09-29 06:36:42.008940 | orchestrator | changed: [localhost] 2025-09-29 06:36:42.008948 | orchestrator | 2025-09-29 06:36:42.008956 | orchestrator | 
TASK [Create public network] *************************************************** 2025-09-29 06:36:42.008965 | orchestrator | Monday 29 September 2025 06:36:15 +0000 (0:00:07.257) 0:00:30.629 ****** 2025-09-29 06:36:42.008973 | orchestrator | changed: [localhost] 2025-09-29 06:36:42.008981 | orchestrator | 2025-09-29 06:36:42.008989 | orchestrator | TASK [Set public network to default] ******************************************* 2025-09-29 06:36:42.008997 | orchestrator | Monday 29 September 2025 06:36:22 +0000 (0:00:06.980) 0:00:37.610 ****** 2025-09-29 06:36:42.009006 | orchestrator | changed: [localhost] 2025-09-29 06:36:42.009014 | orchestrator | 2025-09-29 06:36:42.009022 | orchestrator | TASK [Create public subnet] **************************************************** 2025-09-29 06:36:42.009039 | orchestrator | Monday 29 September 2025 06:36:29 +0000 (0:00:06.363) 0:00:43.974 ****** 2025-09-29 06:36:42.009047 | orchestrator | changed: [localhost] 2025-09-29 06:36:42.009055 | orchestrator | 2025-09-29 06:36:42.009064 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-09-29 06:36:42.009072 | orchestrator | Monday 29 September 2025 06:36:33 +0000 (0:00:04.381) 0:00:48.355 ****** 2025-09-29 06:36:42.009080 | orchestrator | changed: [localhost] 2025-09-29 06:36:42.009088 | orchestrator | 2025-09-29 06:36:42.009097 | orchestrator | TASK [Create manager role] ***************************************************** 2025-09-29 06:36:42.009105 | orchestrator | Monday 29 September 2025 06:36:38 +0000 (0:00:04.814) 0:00:53.169 ****** 2025-09-29 06:36:42.009113 | orchestrator | ok: [localhost] 2025-09-29 06:36:42.009121 | orchestrator | 2025-09-29 06:36:42.009129 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:36:42.009138 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-29 06:36:42.009146 | orchestrator 
| 2025-09-29 06:36:42.009155 | orchestrator | 2025-09-29 06:36:42.009208 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:36:42.009245 | orchestrator | Monday 29 September 2025 06:36:41 +0000 (0:00:03.501) 0:00:56.671 ****** 2025-09-29 06:36:42.009255 | orchestrator | =============================================================================== 2025-09-29 06:36:42.009265 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.49s 2025-09-29 06:36:42.009275 | orchestrator | Create volume type local ------------------------------------------------ 7.26s 2025-09-29 06:36:42.009284 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.13s 2025-09-29 06:36:42.009294 | orchestrator | Create public network --------------------------------------------------- 6.98s 2025-09-29 06:36:42.009303 | orchestrator | Get volume type local --------------------------------------------------- 6.98s 2025-09-29 06:36:42.009313 | orchestrator | Set public network to default ------------------------------------------- 6.36s 2025-09-29 06:36:42.009322 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.81s 2025-09-29 06:36:42.009332 | orchestrator | Create public subnet ---------------------------------------------------- 4.38s 2025-09-29 06:36:42.009341 | orchestrator | Create manager role ----------------------------------------------------- 3.50s 2025-09-29 06:36:42.009351 | orchestrator | Gathering Facts --------------------------------------------------------- 1.69s 2025-09-29 06:36:44.276896 | orchestrator | 2025-09-29 06:36:44 | INFO  | It takes a moment until task e3187e4a-5571-4a7d-915d-6c810474fe14 (image-manager) has been started and output is visible here. 
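The bootstrap-basic play above works in "Get X" / "Create X" task pairs: each resource is looked up first, and the create task reports `changed` only when it actually had to make something. That idempotent get-or-create pattern can be sketched against a hypothetical client interface (the real play uses the openstack.cloud Ansible modules, not this code):

```python
def ensure_volume_type(client, name, **attrs):
    """Get-or-create a volume type, mirroring the 'Get volume type X' /
    'Create volume type X' task pairs in the play recap above.

    `client` is a hypothetical object exposing find_type/create_type.
    Returns (volume_type, changed) so a caller can report Ansible-style
    ok/changed state.
    """
    existing = client.find_type(name)
    if existing is not None:
        return existing, False            # second run: ok, nothing to do
    return client.create_type(name=name, **attrs), True   # first run: changed
```

Running it twice with the same name yields `changed=True` then `changed=False`, matching the `ok=10 changed=6` style of recap shown above.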
2025-09-29 06:37:25.117523 | orchestrator | 2025-09-29 06:36:47 | INFO  | Processing image 'Cirros 0.6.2' 2025-09-29 06:37:25.117711 | orchestrator | 2025-09-29 06:36:47 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-09-29 06:37:25.117747 | orchestrator | 2025-09-29 06:36:47 | INFO  | Importing image Cirros 0.6.2 2025-09-29 06:37:25.117768 | orchestrator | 2025-09-29 06:36:47 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-29 06:37:25.117788 | orchestrator | 2025-09-29 06:36:49 | INFO  | Waiting for image to leave queued state... 2025-09-29 06:37:25.117808 | orchestrator | 2025-09-29 06:36:51 | INFO  | Waiting for import to complete... 2025-09-29 06:37:25.117829 | orchestrator | 2025-09-29 06:37:01 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-09-29 06:37:25.117847 | orchestrator | 2025-09-29 06:37:02 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-09-29 06:37:25.117866 | orchestrator | 2025-09-29 06:37:02 | INFO  | Setting internal_version = 0.6.2 2025-09-29 06:37:25.117885 | orchestrator | 2025-09-29 06:37:02 | INFO  | Setting image_original_user = cirros 2025-09-29 06:37:25.117904 | orchestrator | 2025-09-29 06:37:02 | INFO  | Adding tag os:cirros 2025-09-29 06:37:25.117925 | orchestrator | 2025-09-29 06:37:02 | INFO  | Setting property architecture: x86_64 2025-09-29 06:37:25.117945 | orchestrator | 2025-09-29 06:37:02 | INFO  | Setting property hw_disk_bus: scsi 2025-09-29 06:37:25.117964 | orchestrator | 2025-09-29 06:37:02 | INFO  | Setting property hw_rng_model: virtio 2025-09-29 06:37:25.117982 | orchestrator | 2025-09-29 06:37:03 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-29 06:37:25.118004 | orchestrator | 2025-09-29 06:37:03 | INFO  | Setting property hw_watchdog_action: reset 2025-09-29 06:37:25.118092 | orchestrator | 2025-09-29 06:37:03 | 
INFO  | Setting property hypervisor_type: qemu 2025-09-29 06:37:25.118106 | orchestrator | 2025-09-29 06:37:03 | INFO  | Setting property os_distro: cirros 2025-09-29 06:37:25.118119 | orchestrator | 2025-09-29 06:37:03 | INFO  | Setting property os_purpose: minimal 2025-09-29 06:37:25.118132 | orchestrator | 2025-09-29 06:37:04 | INFO  | Setting property replace_frequency: never 2025-09-29 06:37:25.118194 | orchestrator | 2025-09-29 06:37:04 | INFO  | Setting property uuid_validity: none 2025-09-29 06:37:25.118208 | orchestrator | 2025-09-29 06:37:04 | INFO  | Setting property provided_until: none 2025-09-29 06:37:25.118232 | orchestrator | 2025-09-29 06:37:04 | INFO  | Setting property image_description: Cirros 2025-09-29 06:37:25.118250 | orchestrator | 2025-09-29 06:37:04 | INFO  | Setting property image_name: Cirros 2025-09-29 06:37:25.118262 | orchestrator | 2025-09-29 06:37:05 | INFO  | Setting property internal_version: 0.6.2 2025-09-29 06:37:25.118275 | orchestrator | 2025-09-29 06:37:05 | INFO  | Setting property image_original_user: cirros 2025-09-29 06:37:25.118288 | orchestrator | 2025-09-29 06:37:05 | INFO  | Setting property os_version: 0.6.2 2025-09-29 06:37:25.118301 | orchestrator | 2025-09-29 06:37:05 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-09-29 06:37:25.118315 | orchestrator | 2025-09-29 06:37:06 | INFO  | Setting property image_build_date: 2023-05-30 2025-09-29 06:37:25.118328 | orchestrator | 2025-09-29 06:37:06 | INFO  | Checking status of 'Cirros 0.6.2' 2025-09-29 06:37:25.118340 | orchestrator | 2025-09-29 06:37:06 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-09-29 06:37:25.118353 | orchestrator | 2025-09-29 06:37:06 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-09-29 06:37:25.118366 | orchestrator | 2025-09-29 06:37:06 | INFO  | Processing image 'Cirros 0.6.3' 2025-09-29 06:37:25.118379 | orchestrator | 2025-09-29 
06:37:06 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-09-29 06:37:25.118390 | orchestrator | 2025-09-29 06:37:06 | INFO  | Importing image Cirros 0.6.3 2025-09-29 06:37:25.118400 | orchestrator | 2025-09-29 06:37:06 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-29 06:37:25.118411 | orchestrator | 2025-09-29 06:37:07 | INFO  | Waiting for image to leave queued state... 2025-09-29 06:37:25.118422 | orchestrator | 2025-09-29 06:37:09 | INFO  | Waiting for import to complete... 2025-09-29 06:37:25.118454 | orchestrator | 2025-09-29 06:37:19 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-09-29 06:37:25.118466 | orchestrator | 2025-09-29 06:37:20 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-09-29 06:37:25.118477 | orchestrator | 2025-09-29 06:37:20 | INFO  | Setting internal_version = 0.6.3 2025-09-29 06:37:25.118488 | orchestrator | 2025-09-29 06:37:20 | INFO  | Setting image_original_user = cirros 2025-09-29 06:37:25.118498 | orchestrator | 2025-09-29 06:37:20 | INFO  | Adding tag os:cirros 2025-09-29 06:37:25.118509 | orchestrator | 2025-09-29 06:37:20 | INFO  | Setting property architecture: x86_64 2025-09-29 06:37:25.118520 | orchestrator | 2025-09-29 06:37:20 | INFO  | Setting property hw_disk_bus: scsi 2025-09-29 06:37:25.118530 | orchestrator | 2025-09-29 06:37:20 | INFO  | Setting property hw_rng_model: virtio 2025-09-29 06:37:25.118541 | orchestrator | 2025-09-29 06:37:21 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-29 06:37:25.118552 | orchestrator | 2025-09-29 06:37:21 | INFO  | Setting property hw_watchdog_action: reset 2025-09-29 06:37:25.118563 | orchestrator | 2025-09-29 06:37:21 | INFO  | Setting property hypervisor_type: qemu 2025-09-29 06:37:25.118574 | orchestrator | 2025-09-29 06:37:21 | INFO  | Setting property os_distro: cirros 
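For each image, openstack-image-manager applies a fixed set of Glance properties after import. A sketch of assembling such a property map from an image definition; the key names come from the log output above, while the `defn` dict shape is a hypothetical stand-in for the tool's actual definition format:

```python
def build_image_properties(defn):
    """Assemble the Glance properties the log shows being set per image.

    `defn` is a hypothetical dict mirroring one image definition entry
    (name, version, url, os_distro, build_date); property names are taken
    from the 'Setting property ...' lines above, not from the tool's source.
    """
    return {
        "architecture": defn.get("architecture", "x86_64"),
        "hw_disk_bus": "scsi",
        "hw_rng_model": "virtio",
        "hw_scsi_model": "virtio-scsi",
        "hw_watchdog_action": "reset",
        "os_distro": defn["os_distro"],
        "image_name": defn["name"],
        "internal_version": defn["version"],
        "os_version": defn["version"],
        "image_source": defn["url"],
        "image_build_date": defn["build_date"],
    }
```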
2025-09-29 06:37:25.118593 | orchestrator | 2025-09-29 06:37:22 | INFO  | Setting property os_purpose: minimal 2025-09-29 06:37:25.118604 | orchestrator | 2025-09-29 06:37:22 | INFO  | Setting property replace_frequency: never 2025-09-29 06:37:25.118615 | orchestrator | 2025-09-29 06:37:22 | INFO  | Setting property uuid_validity: none 2025-09-29 06:37:25.118626 | orchestrator | 2025-09-29 06:37:22 | INFO  | Setting property provided_until: none 2025-09-29 06:37:25.118636 | orchestrator | 2025-09-29 06:37:22 | INFO  | Setting property image_description: Cirros 2025-09-29 06:37:25.118647 | orchestrator | 2025-09-29 06:37:23 | INFO  | Setting property image_name: Cirros 2025-09-29 06:37:25.118658 | orchestrator | 2025-09-29 06:37:23 | INFO  | Setting property internal_version: 0.6.3 2025-09-29 06:37:25.118668 | orchestrator | 2025-09-29 06:37:23 | INFO  | Setting property image_original_user: cirros 2025-09-29 06:37:25.118679 | orchestrator | 2025-09-29 06:37:23 | INFO  | Setting property os_version: 0.6.3 2025-09-29 06:37:25.118690 | orchestrator | 2025-09-29 06:37:23 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-09-29 06:37:25.118700 | orchestrator | 2025-09-29 06:37:24 | INFO  | Setting property image_build_date: 2024-09-26 2025-09-29 06:37:25.118717 | orchestrator | 2025-09-29 06:37:24 | INFO  | Checking status of 'Cirros 0.6.3' 2025-09-29 06:37:25.118728 | orchestrator | 2025-09-29 06:37:24 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-09-29 06:37:25.118739 | orchestrator | 2025-09-29 06:37:24 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-09-29 06:37:25.315662 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-09-29 06:37:27.149838 | orchestrator | 2025-09-29 06:37:27 | INFO  | date: 2025-09-29 2025-09-29 06:37:27.149963 | orchestrator | 2025-09-29 06:37:27 | INFO  | image: 
octavia-amphora-haproxy-2024.2.20250929.qcow2 2025-09-29 06:37:27.149994 | orchestrator | 2025-09-29 06:37:27 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250929.qcow2 2025-09-29 06:37:27.150377 | orchestrator | 2025-09-29 06:37:27 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250929.qcow2.CHECKSUM 2025-09-29 06:37:27.218420 | orchestrator | 2025-09-29 06:37:27 | INFO  | checksum: 17d696a49458a60e4f1468c7afd643fed1b8d34e997714d84150d5f97f2dd71a 2025-09-29 06:37:27.293870 | orchestrator | 2025-09-29 06:37:27 | INFO  | It takes a moment until task 146a93fd-76bd-4714-8b90-70a6a2ffddf8 (image-manager) has been started and output is visible here. 2025-09-29 06:38:29.011983 | orchestrator | 2025-09-29 06:37:29 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-29' 2025-09-29 06:38:29.012131 | orchestrator | 2025-09-29 06:37:29 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250929.qcow2: 200 2025-09-29 06:38:29.012157 | orchestrator | 2025-09-29 06:37:29 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-29 2025-09-29 06:38:29.012170 | orchestrator | 2025-09-29 06:37:29 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250929.qcow2 2025-09-29 06:38:29.012183 | orchestrator | 2025-09-29 06:37:30 | INFO  | Waiting for image to leave queued state... 2025-09-29 06:38:29.012195 | orchestrator | 2025-09-29 06:37:32 | INFO  | Waiting for import to complete... 
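The amphora bootstrap script above resolves a `checksum_url` (the published `.CHECKSUM` file) and logs the expected SHA-256 before importing the image. The verification step reduces to a hash comparison, sketched here with `hashlib` (the real script shells out rather than using Python):

```python
import hashlib

def sha256_matches(data: bytes, expected_hex: str) -> bool:
    """Compare downloaded image bytes against a published SHA-256 digest,
    as done with the .CHECKSUM file fetched above (illustrative sketch)."""
    return hashlib.sha256(data).hexdigest() == expected_hex.strip().lower()
```

For large images the bytes would be hashed in chunks rather than held in memory; the comparison itself is identical.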
2025-09-29 06:38:29.012232 | orchestrator | 2025-09-29 06:37:43 | INFO  | Waiting for import to complete... 2025-09-29 06:38:29.012252 | orchestrator | 2025-09-29 06:37:53 | INFO  | Waiting for import to complete... 2025-09-29 06:38:29.012269 | orchestrator | 2025-09-29 06:38:03 | INFO  | Waiting for import to complete... 2025-09-29 06:38:29.012289 | orchestrator | 2025-09-29 06:38:13 | INFO  | Waiting for import to complete... 2025-09-29 06:38:29.012307 | orchestrator | 2025-09-29 06:38:23 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-29' successfully completed, reloading images 2025-09-29 06:38:29.012327 | orchestrator | 2025-09-29 06:38:23 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-29' 2025-09-29 06:38:29.012342 | orchestrator | 2025-09-29 06:38:23 | INFO  | Setting internal_version = 2025-09-29 2025-09-29 06:38:29.012353 | orchestrator | 2025-09-29 06:38:23 | INFO  | Setting image_original_user = ubuntu 2025-09-29 06:38:29.012364 | orchestrator | 2025-09-29 06:38:23 | INFO  | Adding tag amphora 2025-09-29 06:38:29.012375 | orchestrator | 2025-09-29 06:38:24 | INFO  | Adding tag os:ubuntu 2025-09-29 06:38:29.012386 | orchestrator | 2025-09-29 06:38:24 | INFO  | Setting property architecture: x86_64 2025-09-29 06:38:29.012398 | orchestrator | 2025-09-29 06:38:24 | INFO  | Setting property hw_disk_bus: scsi 2025-09-29 06:38:29.012408 | orchestrator | 2025-09-29 06:38:24 | INFO  | Setting property hw_rng_model: virtio 2025-09-29 06:38:29.012419 | orchestrator | 2025-09-29 06:38:25 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-09-29 06:38:29.012445 | orchestrator | 2025-09-29 06:38:25 | INFO  | Setting property hw_watchdog_action: reset 2025-09-29 06:38:29.012456 | orchestrator | 2025-09-29 06:38:25 | INFO  | Setting property hypervisor_type: qemu 2025-09-29 06:38:29.012467 | orchestrator | 2025-09-29 06:38:26 | INFO  | Setting property os_distro: ubuntu 2025-09-29 06:38:29.012478 | orchestrator | 2025-09-29 06:38:26 | 
INFO  | Setting property replace_frequency: quarterly 2025-09-29 06:38:29.012489 | orchestrator | 2025-09-29 06:38:26 | INFO  | Setting property uuid_validity: last-1 2025-09-29 06:38:29.012500 | orchestrator | 2025-09-29 06:38:26 | INFO  | Setting property provided_until: none 2025-09-29 06:38:29.012510 | orchestrator | 2025-09-29 06:38:26 | INFO  | Setting property os_purpose: network 2025-09-29 06:38:29.012521 | orchestrator | 2025-09-29 06:38:27 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-09-29 06:38:29.012535 | orchestrator | 2025-09-29 06:38:27 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-09-29 06:38:29.012549 | orchestrator | 2025-09-29 06:38:27 | INFO  | Setting property internal_version: 2025-09-29 2025-09-29 06:38:29.012561 | orchestrator | 2025-09-29 06:38:27 | INFO  | Setting property image_original_user: ubuntu 2025-09-29 06:38:29.012574 | orchestrator | 2025-09-29 06:38:27 | INFO  | Setting property os_version: 2025-09-29 2025-09-29 06:38:29.012587 | orchestrator | 2025-09-29 06:38:28 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250929.qcow2 2025-09-29 06:38:29.012601 | orchestrator | 2025-09-29 06:38:28 | INFO  | Setting property image_build_date: 2025-09-29 2025-09-29 06:38:29.012615 | orchestrator | 2025-09-29 06:38:28 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-29' 2025-09-29 06:38:29.012628 | orchestrator | 2025-09-29 06:38:28 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-29' 2025-09-29 06:38:29.012667 | orchestrator | 2025-09-29 06:38:28 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-09-29 06:38:29.012682 | orchestrator | 2025-09-29 06:38:28 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-09-29 06:38:29.012696 | orchestrator | 2025-09-29 
06:38:28 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-09-29 06:38:29.012709 | orchestrator | 2025-09-29 06:38:28 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-09-29 06:38:29.499828 | orchestrator | ok: Runtime: 0:03:13.164411 2025-09-29 06:38:29.561484 | 2025-09-29 06:38:29.561607 | TASK [Run checks] 2025-09-29 06:38:30.249606 | orchestrator | + set -e 2025-09-29 06:38:30.249819 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-29 06:38:30.249842 | orchestrator | ++ export INTERACTIVE=false 2025-09-29 06:38:30.249864 | orchestrator | ++ INTERACTIVE=false 2025-09-29 06:38:30.249878 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-29 06:38:30.249890 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-29 06:38:30.249904 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-29 06:38:30.250545 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-29 06:38:30.256951 | orchestrator | 2025-09-29 06:38:30.257048 | orchestrator | # CHECK 2025-09-29 06:38:30.257063 | orchestrator | 2025-09-29 06:38:30.257076 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-29 06:38:30.257126 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-29 06:38:30.257141 | orchestrator | + echo 2025-09-29 06:38:30.257152 | orchestrator | + echo '# CHECK' 2025-09-29 06:38:30.257163 | orchestrator | + echo 2025-09-29 06:38:30.257178 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-29 06:38:30.257564 | orchestrator | ++ semver latest 5.0.0 2025-09-29 06:38:30.320669 | orchestrator | 2025-09-29 06:38:30.320763 | orchestrator | ## Containers @ testbed-manager 2025-09-29 06:38:30.320774 | orchestrator | 2025-09-29 06:38:30.320783 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-29 06:38:30.320790 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-29 
06:38:30.320796 | orchestrator | + echo 2025-09-29 06:38:30.320803 | orchestrator | + echo '## Containers @ testbed-manager' 2025-09-29 06:38:30.320810 | orchestrator | + echo 2025-09-29 06:38:30.320816 | orchestrator | + osism container testbed-manager ps 2025-09-29 06:38:32.562477 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-29 06:38:32.563569 | orchestrator | 55c203bf1935 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2025-09-29 06:38:32.563617 | orchestrator | 27b1f7dc9f91 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-09-29 06:38:32.563656 | orchestrator | 99afad3d4f90 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-09-29 06:38:32.563675 | orchestrator | 26ed8d83d910 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-09-29 06:38:32.563691 | orchestrator | d63c14cb71ee phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2025-09-29 06:38:32.563717 | orchestrator | 809b95916f4e registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 31 minutes openstackclient 2025-09-29 06:38:32.563747 | orchestrator | f4cf2faf4892 registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 32 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-09-29 06:38:32.563767 | orchestrator | 39591b68c94b registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-09-29 06:38:32.563787 | orchestrator | cbc5855776a5 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1 2025-09-29 06:38:32.563842 | orchestrator | ac08cb9dbc57 
registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-09-29 06:38:32.563863 | orchestrator | 7bb1630a80cc registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-09-29 06:38:32.563882 | orchestrator | 42610ffca290 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) osism-ansible 2025-09-29 06:38:32.563902 | orchestrator | 4d5102d02f08 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 57 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-09-29 06:38:32.563916 | orchestrator | 2e9e01fff808 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 57 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1 2025-09-29 06:38:32.563927 | orchestrator | 344dca4cc2b5 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-openstack-1 2025-09-29 06:38:32.563938 | orchestrator | 5d6bbad7202c registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-flower-1 2025-09-29 06:38:32.563981 | orchestrator | 5323f6a9f5ce registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-beat-1 2025-09-29 06:38:32.563994 | orchestrator | f3eae9c44909 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 57 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1 2025-09-29 06:38:32.564005 | orchestrator | 4013e3ec8b84 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) manager-listener-1 2025-09-29 06:38:32.564016 | orchestrator | 5f0f5cd56612 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-09-29 06:38:32.564028 | 
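The check step lists every container per node via `osism container <node> ps`, and a quick health read of that output is just counting which STATUS fields carry Docker's `(healthy)` marker. A tiny helper for tallying such STATUS strings (illustrative, not part of the testbed scripts):

```python
def count_healthy(status_lines):
    """Given docker-ps STATUS strings like 'Up 38 minutes (healthy)',
    return (healthy, total). Containers without a HEALTHCHECK show no
    health marker at all and simply don't count as healthy here."""
    healthy = sum(1 for status in status_lines if "(healthy)" in status)
    return healthy, len(status_lines)
```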
orchestrator | 8112ad85fdbf registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 57 minutes ago Up 38 minutes (healthy) osismclient
2025-09-29 06:38:32.564039 | orchestrator | f8e684e2ce72 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 57 minutes ago Up 38 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-29 06:38:32.564050 | orchestrator | 6e10d60f8edb registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" 57 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-29 06:38:32.564062 | orchestrator | 7face2846243 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-29 06:38:32.839088 | orchestrator |
2025-09-29 06:38:32.839213 | orchestrator | ## Images @ testbed-manager
2025-09-29 06:38:32.839229 | orchestrator |
2025-09-29 06:38:32.839240 | orchestrator | + echo
2025-09-29 06:38:32.839251 | orchestrator | + echo '## Images @ testbed-manager'
2025-09-29 06:38:32.839261 | orchestrator | + echo
2025-09-29 06:38:32.839271 | orchestrator | + osism container testbed-manager images
2025-09-29 06:38:35.043448 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-29 06:38:35.043566 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 cc22494c7bd4 3 hours ago 243MB
2025-09-29 06:38:35.043583 | orchestrator | registry.osism.tech/osism/cephclient reef cf1ca6dcddca 3 hours ago 453MB
2025-09-29 06:38:35.043595 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7bbb3719b18d 5 hours ago 283MB
2025-09-29 06:38:35.043610 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4c919ded5455 5 hours ago 686MB
2025-09-29 06:38:35.043625 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 93a6e45d44d5 5 hours ago 597MB
2025-09-29 06:38:35.043636 | orchestrator | registry.osism.tech/osism/osism-ansible latest 481c66f322a7 6 hours ago 595MB
2025-09-29 06:38:35.043647 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 af3d0cf4ec52 6 hours ago 591MB
2025-09-29 06:38:35.043658 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 39ca8aeff4c3 6 hours ago 544MB
2025-09-29 06:38:35.043669 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 98feec33c239 7 hours ago 1.23GB
2025-09-29 06:38:35.043680 | orchestrator | registry.osism.tech/osism/osism latest ad23a03a0c3c 7 hours ago 326MB
2025-09-29 06:38:35.043691 | orchestrator | registry.osism.tech/osism/osism-frontend latest 10ce69caac34 7 hours ago 238MB
2025-09-29 06:38:35.043703 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 936abd4c7435 7 hours ago 315MB
2025-09-29 06:38:35.043734 | orchestrator | registry.osism.tech/osism/homer v25.08.1 849a6c620511 2 days ago 11.5MB
2025-09-29 06:38:35.043746 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 4 weeks ago 275MB
2025-09-29 06:38:35.043757 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 48f7ae354376 7 weeks ago 329MB
2025-09-29 06:38:35.043768 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 2 months ago 226MB
2025-09-29 06:38:35.043779 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-29 06:38:35.043790 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 8 months ago 571MB
2025-09-29 06:38:35.043801 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-29 06:38:35.374906 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-29 06:38:35.375384 | orchestrator | ++ semver latest 5.0.0
2025-09-29 06:38:35.421222 | orchestrator |
2025-09-29 06:38:35.421328 | orchestrator | ## Containers @ testbed-node-0
2025-09-29 06:38:35.421344 | orchestrator |
2025-09-29 06:38:35.421357 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-29 06:38:35.421369 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-29 06:38:35.421380 | orchestrator | + echo
2025-09-29 06:38:35.421392 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-29 06:38:35.421404 | orchestrator | + echo
2025-09-29 06:38:35.421415 | orchestrator | + osism container testbed-node-0 ps
2025-09-29 06:38:37.777773 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-29 06:38:37.778522 | orchestrator | 785846a04c2c registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-29 06:38:37.778546 | orchestrator | 12bd378b4b76 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-29 06:38:37.778575 | orchestrator | 79a048c2c90a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-29 06:38:37.778581 | orchestrator | b682228c2270 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-29 06:38:37.778587 | orchestrator | a6f2fd851506 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-29 06:38:37.778592 | orchestrator | 9f297047cf4d registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-29 06:38:37.778599 | orchestrator | fb25de6b6af8 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes grafana
2025-09-29 06:38:37.778604 | orchestrator | b0f4826cb6f8 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) cinder_api
2025-09-29 06:38:37.778609 | orchestrator | bf5a27dc58e4 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor
2025-09-29 06:38:37.778614 | orchestrator | 160db88c9a79 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-29 06:38:37.778620 | orchestrator | ad606e850b0c registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2025-09-29 06:38:37.778625 | orchestrator | 285f20137110 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-09-29 06:38:37.778631 | orchestrator | f103db28b959 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-09-29 06:38:37.778637 | orchestrator | 0cd2a5104ea4 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-09-29 06:38:37.778651 | orchestrator | ff2a181c29ae registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-09-29 06:38:37.778657 | orchestrator | 569b4d8be461 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2025-09-29 06:38:37.778663 | orchestrator | 26aa6bc0ad15 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-09-29 06:38:37.778668 | orchestrator | 08021fcd50e9 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-09-29 06:38:37.778673 | orchestrator | e851f871aa7e registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-09-29 06:38:37.778679 | orchestrator | 77a31c9c0500 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-09-29 06:38:37.778684 | orchestrator | d667c8eb6d1e registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-09-29 06:38:37.778703 | orchestrator | dfce8b886a32 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-09-29 06:38:37.778713 | orchestrator | b482c2895e9c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter
2025-09-29 06:38:37.778719 | orchestrator | eeb238b7eaad registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-09-29 06:38:37.778724 | orchestrator | 6f8a18917905 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-09-29 06:38:37.778730 | orchestrator | 2afaf8da5e5e registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-09-29 06:38:37.778735 | orchestrator | 7e728a49c96d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-09-29 06:38:37.778740 | orchestrator | a828ca2d9c9c registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-09-29 06:38:37.778746 | orchestrator | 56e118970fd0 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-09-29 06:38:37.778751 | orchestrator | 2906582496f8 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-09-29 06:38:37.778756 | orchestrator | c0bda6f2083f registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) horizon
2025-09-29 06:38:37.778761 | orchestrator | e38584a671a4 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-09-29 06:38:37.778777 | orchestrator | 9cf513b1251a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-09-29 06:38:37.778785 | orchestrator | 1efa2f27bd50 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-09-29 06:38:37.778796 | orchestrator | aafd7334c977 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0
2025-09-29 06:38:37.778805 | orchestrator | 092acd7aa94a registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-09-29 06:38:37.778817 | orchestrator | 7a2ba4807192 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-09-29 06:38:37.778825 | orchestrator | d0f83294609e registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-09-29 06:38:37.778834 | orchestrator | 5641f13469ce registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-09-29 06:38:37.778842 | orchestrator | 085f336a3bd2 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-09-29 06:38:37.778850 | orchestrator | bb9f570c9756 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-09-29 06:38:37.778863 | orchestrator | 36a6b07538c8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2025-09-29 06:38:37.778871 | orchestrator | 2ad65b52194f registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-09-29 06:38:37.778879 | orchestrator | 718feb21aab3 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq
2025-09-29 06:38:37.778898 | orchestrator | 33dd09f6f0d1 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-09-29 06:38:37.778905 | orchestrator | 1cb6a78e0e10 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-09-29 06:38:37.778912 | orchestrator | 56f1cb33e0c1 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-09-29 06:38:37.778919 | orchestrator | 1d0b7b14a1b5 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-29 06:38:37.778926 | orchestrator | 0c8a83ae40a9 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) memcached
2025-09-29 06:38:37.778933 | orchestrator | fc272fa503f8 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-29 06:38:37.778940 | orchestrator | 8be24fa9beae registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-29 06:38:37.778946 | orchestrator | 5cd431b77e66 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-29 06:38:38.103771 | orchestrator |
2025-09-29 06:38:38.103907 | orchestrator | ## Images @ testbed-node-0
2025-09-29 06:38:38.103933 | orchestrator |
2025-09-29 06:38:38.103953 | orchestrator | + echo
2025-09-29 06:38:38.103972 | orchestrator | + echo '## Images @ testbed-node-0'
2025-09-29 06:38:38.103993 | orchestrator | + echo
2025-09-29 06:38:38.104014 | orchestrator | + osism container testbed-node-0 images
2025-09-29 06:38:40.574496 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-29 06:38:40.574609 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 741982dbec6f 3 hours ago 1.27GB
2025-09-29 06:38:40.574626 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d63382ec1034 5 hours ago 292MB
2025-09-29 06:38:40.574638 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 cc6de98271b9 5 hours ago 383MB
2025-09-29 06:38:40.574650 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7bbb3719b18d 5 hours ago 283MB
2025-09-29 06:38:40.574661 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 62d71a81d22a 5 hours ago 1.53GB
2025-09-29 06:38:40.574671 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 f2cdbadb5f0d 5 hours ago 1.55GB
2025-09-29 06:38:40.574704 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 1187998f243a 5 hours ago 340MB
2025-09-29 06:38:40.574716 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4c919ded5455 5 hours ago 686MB
2025-09-29 06:38:40.574727 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 29d37f287821 5 hours ago 1.02GB
2025-09-29 06:38:40.574738 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 01bf12110f22 5 hours ago 294MB
2025-09-29 06:38:40.574771 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 93a6e45d44d5 5 hours ago 597MB
2025-09-29 06:38:40.574782 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 0b39a1d58938 5 hours ago 284MB
2025-09-29 06:38:40.574793 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 89534f132a5b 5 hours ago 300MB
2025-09-29 06:38:40.574804 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 6bd796ff4175 5 hours ago 300MB
2025-09-29 06:38:40.574815 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 fb17240a46d3 5 hours ago 465MB
2025-09-29 06:38:40.574826 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7e2f8a4c40e0 5 hours ago 291MB
2025-09-29 06:38:40.574837 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 fddff0214cd2 5 hours ago 291MB
2025-09-29 06:38:40.574855 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9a0ab24193b1 5 hours ago 1.16GB
2025-09-29 06:38:40.574878 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d5635479d056 5 hours ago 319MB
2025-09-29 06:38:40.574902 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 f4c1e352efec 5 hours ago 316MB
2025-09-29 06:38:40.574919 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 14da0e895ff1 5 hours ago 310MB
2025-09-29 06:38:40.574935 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 91ffc3618f4a 5 hours ago 375MB
2025-09-29 06:38:40.574951 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7cdfbd1f4840 5 hours ago 323MB
2025-09-29 06:38:40.574969 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 951b973d5f5d 5 hours ago 307MB
2025-09-29 06:38:40.574987 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 61dce529f4a7 5 hours ago 307MB
2025-09-29 06:38:40.575004 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 445fc581f7f9 5 hours ago 307MB
2025-09-29 06:38:40.575020 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 6f34dd9357c0 5 hours ago 307MB
2025-09-29 06:38:40.575037 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 85a924aee7ec 5 hours ago 1.07GB
2025-09-29 06:38:40.575062 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 1dcdb9f11be3 5 hours ago 1.07GB
2025-09-29 06:38:40.575122 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 cb0bad7f0b11 5 hours ago 1.05GB
2025-09-29 06:38:40.575142 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 8c8a891561ef 5 hours ago 1.05GB
2025-09-29 06:38:40.575161 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 9dafec748c81 5 hours ago 1.05GB
2025-09-29 06:38:40.575179 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 0a2846ef207c 5 hours ago 1.01GB
2025-09-29 06:38:40.575198 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 756a2c5bd1c5 5 hours ago 1.07GB
2025-09-29 06:38:40.575217 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f012752088ed 5 hours ago 1.1GB
2025-09-29 06:38:40.575236 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 3c71339fdb0f 5 hours ago 1.06GB
2025-09-29 06:38:40.575254 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 d47706deb800 5 hours ago 1.06GB
2025-09-29 06:38:40.575290 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf3ed0aa60d2 5 hours ago 1.12GB
2025-09-29 06:38:40.575301 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 044f38dc6633 5 hours ago 1.18GB
2025-09-29 06:38:40.575325 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 09de8228be3b 5 hours ago 1.42GB
2025-09-29 06:38:40.575348 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0fefc914c23b 5 hours ago 1.42GB
2025-09-29 06:38:40.575365 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b61fa9fd6092 5 hours ago 1.22GB
2025-09-29 06:38:40.575392 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 921c4c791441 5 hours ago 1.22GB
2025-09-29 06:38:40.575415 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7cf146e96644 5 hours ago 1.22GB
2025-09-29 06:38:40.575433 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c3639be31f27 5 hours ago 1.38GB
2025-09-29 06:38:40.575454 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 0028e981a4a4 5 hours ago 1.01GB
2025-09-29 06:38:40.575471 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 a53f0ece27bd 5 hours ago 1.01GB
2025-09-29 06:38:40.575484 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 97a9fb1361f5 5 hours ago 1.01GB
2025-09-29 06:38:40.575504 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 aa7c4eec338f 5 hours ago 994MB
2025-09-29 06:38:40.575523 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 8cfbef16ecb8 5 hours ago 1.26GB
2025-09-29 06:38:40.575540 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 40c3e4b4bb2f 5 hours ago 1.15GB
2025-09-29 06:38:40.575558 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 4eb61c072cd2 5 hours ago 992MB
2025-09-29 06:38:40.575575 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 6ac0269aaf6a 5 hours ago 992MB
2025-09-29 06:38:40.575595 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 48d406e26696 5 hours ago 992MB
2025-09-29 06:38:40.575614 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 64084f9c5bf3 5 hours ago 992MB
2025-09-29 06:38:40.575641 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 77f37ebaac7e 5 hours ago 1GB
2025-09-29 06:38:40.575653 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 163e26f1e0c9 5 hours ago 1.01GB
2025-09-29 06:38:40.575663 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7bb26b81db01 5 hours ago 1.01GB
2025-09-29 06:38:40.575674 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f40952e206c7 5 hours ago 1GB
2025-09-29 06:38:40.575685 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 2d29edd8823b 5 hours ago 1GB
2025-09-29 06:38:40.575696 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 80971f8378d6 5 hours ago 1GB
2025-09-29 06:38:40.575706 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 b6650a5be179 5 hours ago 994MB
2025-09-29 06:38:40.575717 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 1a86f18e6cb4 5 hours ago 995MB
2025-09-29 06:38:40.854860 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-29 06:38:40.855612 | orchestrator | ++ semver latest 5.0.0
2025-09-29 06:38:40.915314 | orchestrator |
2025-09-29 06:38:40.915388 | orchestrator | ## Containers @ testbed-node-1
2025-09-29 06:38:40.915397 | orchestrator |
2025-09-29 06:38:40.915404 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-29 06:38:40.915410 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-29 06:38:40.915416 | orchestrator | + echo
2025-09-29 06:38:40.915423 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-09-29 06:38:40.915430 | orchestrator | + echo
2025-09-29 06:38:40.915436 | orchestrator | + osism container testbed-node-1 ps
2025-09-29 06:38:43.284175 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-29 06:38:43.284265 | orchestrator | b4d04163d6dc registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-29 06:38:43.284295 | orchestrator | 3568929b9595 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-29 06:38:43.284302 | orchestrator | 4d43aa97bc30 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-29 06:38:43.284309 | orchestrator | cc4fd7f120ed registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-29 06:38:43.284316 | orchestrator | c3cfec38769f registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana
2025-09-29 06:38:43.284322 | orchestrator | 5e706b3ccf9a registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-29 06:38:43.284329 | orchestrator | c906acde3166 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-29 06:38:43.284995 | orchestrator | 0b1142734370 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-29 06:38:43.285006 | orchestrator | 5a1902b3e805 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) magnum_conductor
2025-09-29 06:38:43.285014 | orchestrator | 6445cc8fcffc registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-29 06:38:43.285035 | orchestrator | ac74f6068534 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2025-09-29 06:38:43.285043 | orchestrator | 32ab00808d04 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-09-29 06:38:43.285050 | orchestrator | 72ebf8539b74 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-09-29 06:38:43.285057 | orchestrator | 626a39d74a50 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-09-29 06:38:43.285065 | orchestrator | 5a595f4cc382 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-09-29 06:38:43.285072 | orchestrator | 55c3407cc7ce registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-09-29 06:38:43.285078 | orchestrator | 4d9fb64e81c0 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter
2025-09-29 06:38:43.285105 | orchestrator | 91d4ef3e982c registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-09-29 06:38:43.285112 | orchestrator | f3e1e92f45be registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-09-29 06:38:43.285118 | orchestrator | 20f43ad32c7c registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-09-29 06:38:43.285125 | orchestrator | d40890fec335 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter
2025-09-29 06:38:43.285151 | orchestrator | 8d0b866141ae registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-09-29 06:38:43.285158 | orchestrator | 08c94647c6a8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter
2025-09-29 06:38:43.285164 | orchestrator | 974f4c3f216e registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-09-29 06:38:43.285170 | orchestrator | 338acd641e3a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter
2025-09-29 06:38:43.285176 | orchestrator | 18ea47a7ab5b registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-09-29 06:38:43.285182 | orchestrator | 29239b2ec300 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2025-09-29 06:38:43.285189 | orchestrator | 72c59cd02b9f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone
2025-09-29 06:38:43.285199 | orchestrator | bdae2c316e2a registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-09-29 06:38:43.285205 | orchestrator | 73ee36454535 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-09-29 06:38:43.285215 | orchestrator | 6b26ddf2232d registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-09-29 06:38:43.285222 | orchestrator | 68706099e373 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-09-29 06:38:43.285228 | orchestrator | 4b73632d7167 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-09-29 06:38:43.285235 | orchestrator | 7fd6d92b43c5 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-09-29 06:38:43.285241 | orchestrator | 3dee6119cc0e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1
2025-09-29 06:38:43.285248 | orchestrator | 5b936323d6b2 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived
2025-09-29 06:38:43.285254 | orchestrator | c813097a4e69 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-09-29 06:38:43.285260 | orchestrator | 8b123daa12f6 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-09-29 06:38:43.285266 | orchestrator | 8c1aaa1f01e9 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd
2025-09-29 06:38:43.285272 | orchestrator | 7ea336f59fc1 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-09-29 06:38:43.285283 | orchestrator | 2454058041e0 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-09-29 06:38:43.285289 | orchestrator | 7b3ab9d59623 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-09-29 06:38:43.285296 | orchestrator | 0341792094c2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-09-29 06:38:43.285302 | orchestrator | aed13990054b registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-09-29 06:38:43.285314 | orchestrator | f8a3582fc515 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-09-29 06:38:43.285320 | orchestrator | 044c6cb2e66f registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-09-29 06:38:43.285326 | orchestrator | 4f8978e862c0 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-09-29 06:38:43.285333 | orchestrator | 1f393bdd667a registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) redis
2025-09-29 06:38:43.285339 | orchestrator | 01914b7ab9ca registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached
2025-09-29 06:38:43.285345 | orchestrator | 2dc070fb6ceb registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-29 06:38:43.285351 | orchestrator | 4686de766499 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-29 06:38:43.285358 | orchestrator | 9bad05e494e9 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-29 06:38:43.579883 | orchestrator |
2025-09-29 06:38:43.579992 | orchestrator | ## Images @ testbed-node-1
2025-09-29 06:38:43.580008 | orchestrator |
2025-09-29 06:38:43.580019 | orchestrator | + echo
2025-09-29 06:38:43.580030 | orchestrator | + echo '## Images @ testbed-node-1'
2025-09-29 06:38:43.580041 | orchestrator | + echo
2025-09-29 06:38:43.580051 | orchestrator | + osism container testbed-node-1 images
2025-09-29 06:38:45.945806 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-29 06:38:45.945907 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 741982dbec6f 3 hours ago 1.27GB
2025-09-29 06:38:45.945921 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d63382ec1034 5 hours ago 292MB
2025-09-29 06:38:45.945932 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 cc6de98271b9 5 hours ago 383MB
2025-09-29 06:38:45.945942 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7bbb3719b18d 5 hours ago 283MB
2025-09-29 06:38:45.945952 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 62d71a81d22a 5 hours ago 1.53GB
2025-09-29 06:38:45.945962 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 f2cdbadb5f0d 5 hours ago 1.55GB
2025-09-29 06:38:45.945972 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 1187998f243a 5 hours ago 340MB
2025-09-29 06:38:45.945981 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4c919ded5455 5 hours ago 686MB
2025-09-29 06:38:45.945991 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 29d37f287821 5 hours ago 1.02GB
2025-09-29 06:38:45.946140 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 01bf12110f22 5 hours ago 294MB
2025-09-29 06:38:45.946155 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 93a6e45d44d5 5 hours ago 597MB
2025-09-29 06:38:45.946165 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 0b39a1d58938 5 hours ago 284MB
2025-09-29 06:38:45.946175 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 89534f132a5b 5 hours ago 300MB
2025-09-29 06:38:45.946184 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 6bd796ff4175 5 hours ago 300MB
2025-09-29 06:38:45.946194 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 fb17240a46d3 5 hours ago 465MB
2025-09-29 06:38:45.946203 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7e2f8a4c40e0 5 hours ago 291MB
2025-09-29 06:38:45.946213 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 fddff0214cd2 5 hours ago 291MB
2025-09-29 06:38:45.946222 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9a0ab24193b1 5 hours ago 1.16GB
2025-09-29 06:38:45.946232 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d5635479d056 5 hours ago 319MB
2025-09-29 06:38:45.946241 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 f4c1e352efec 5 hours ago 316MB
2025-09-29 06:38:45.946251 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 14da0e895ff1 5 hours ago 310MB
2025-09-29 06:38:45.946261 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 91ffc3618f4a 5 hours ago 375MB
2025-09-29 06:38:45.946270 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7cdfbd1f4840 5 hours ago 323MB
2025-09-29 06:38:45.946280 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 951b973d5f5d 5 hours ago 307MB
2025-09-29 06:38:45.946290 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 61dce529f4a7 5 hours ago 307MB
2025-09-29 06:38:45.946300 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 445fc581f7f9 5 hours ago 307MB
2025-09-29 06:38:45.946310 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 6f34dd9357c0 5 hours ago 307MB
2025-09-29 06:38:45.946319 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f012752088ed 5 hours ago 1.1GB
2025-09-29 06:38:45.946328 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 3c71339fdb0f 5 hours ago 1.06GB
2025-09-29 06:38:45.946338 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 d47706deb800 5 hours ago 1.06GB
2025-09-29 06:38:45.946347 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf3ed0aa60d2 5 hours ago 1.12GB
2025-09-29 06:38:45.946357 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 044f38dc6633 5 hours ago 1.18GB
2025-09-29 06:38:45.946366 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 09de8228be3b 5 hours ago 1.42GB
2025-09-29 06:38:45.946378 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 0fefc914c23b 5 hours ago 1.42GB
2025-09-29 06:38:45.946388 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b61fa9fd6092 5 hours ago 1.22GB
2025-09-29 06:38:45.946399 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 921c4c791441 5 hours ago 1.22GB
2025-09-29 06:38:45.946410 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7cf146e96644 5 hours ago 1.22GB
2025-09-29 06:38:45.946438 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c3639be31f27 5 hours ago 1.38GB
2025-09-29 06:38:45.946449 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 0028e981a4a4 5 hours ago 1.01GB
2025-09-29 06:38:45.946470 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 a53f0ece27bd 5 hours ago 1.01GB
2025-09-29 06:38:45.946481 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 97a9fb1361f5 5 hours ago 1.01GB
2025-09-29 06:38:45.946492 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 aa7c4eec338f 5 hours ago 994MB
2025-09-29 06:38:45.946512 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 8cfbef16ecb8 5 hours ago 1.26GB
2025-09-29 06:38:45.946530 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 40c3e4b4bb2f 5 hours ago 1.15GB
2025-09-29 06:38:45.946547 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 77f37ebaac7e 5 hours ago 1GB
2025-09-29 06:38:45.946565 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 163e26f1e0c9 5 hours ago 1.01GB
2025-09-29 06:38:45.946585 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7bb26b81db01 5 hours ago 1.01GB
2025-09-29 06:38:45.946603 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f40952e206c7 5 hours ago 1GB
2025-09-29 06:38:45.946620 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 2d29edd8823b 5 hours ago 1GB
2025-09-29 06:38:45.946632 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 80971f8378d6 5 hours ago 1GB
2025-09-29 06:38:46.261258 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-29 06:38:46.261344 | orchestrator | ++ semver latest 5.0.0
2025-09-29 06:38:46.320737 | orchestrator |
2025-09-29 06:38:46.320816 | orchestrator | ## Containers @ testbed-node-2
2025-09-29 06:38:46.320829 | orchestrator |
2025-09-29 06:38:46.320841 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-29 06:38:46.320852 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-29 06:38:46.320864 | orchestrator | + echo
2025-09-29 06:38:46.320876 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-09-29 06:38:46.320888 | orchestrator | + echo
2025-09-29 06:38:46.320898 | orchestrator | + osism container testbed-node-2 ps
2025-09-29 06:38:48.435288 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-29 06:38:48.435477 | orchestrator | ffce2d435e4b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-29 06:38:48.435508 | orchestrator | 050184a327aa registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-29 06:38:48.436524 | orchestrator | 617d4c484f4a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api
2025-09-29 06:38:48.436590 | orchestrator | f49e371e4d5b registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-29 06:38:48.436611 | orchestrator | 2254128f8f9e registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana
2025-09-29 06:38:48.436630 | orchestrator | a57f15b69c26 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-09-29 06:38:48.436642 | orchestrator | d49cc9427c7f registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-29 06:38:48.436652 | orchestrator | 50c2ed7cfcec registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-29 06:38:48.436663 | orchestrator | 5d89ba4eeb6d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-09-29 06:38:48.436705 | orchestrator | 52db5cf0881f registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-29 06:38:48.436716 | orchestrator | 76f712601902 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2025-09-29 06:38:48.436727 | orchestrator | 7dbb31b09851
registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-09-29 06:38:48.436738 | orchestrator | 7d1cbe26481e registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-09-29 06:38:48.436748 | orchestrator | 499004cc76c5 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-09-29 06:38:48.436759 | orchestrator | 103c2303cd28 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-09-29 06:38:48.436770 | orchestrator | b47f8dfa49b8 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-09-29 06:38:48.436781 | orchestrator | 72273b2e8757 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_elasticsearch_exporter 2025-09-29 06:38:48.436792 | orchestrator | 55d83d55f329 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-09-29 06:38:48.436803 | orchestrator | ab89d2a82e4e registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-09-29 06:38:48.436814 | orchestrator | a9cb23d8963a registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-09-29 06:38:48.436825 | orchestrator | a41ee0767056 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_memcached_exporter 2025-09-29 06:38:48.436881 | orchestrator | bfd46673bd8b registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 
minutes (healthy) barbican_worker 2025-09-29 06:38:48.436894 | orchestrator | 6b597db95d15 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_mysqld_exporter 2025-09-29 06:38:48.436905 | orchestrator | 0f3b5acb8ea2 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-09-29 06:38:48.436916 | orchestrator | 62d658073e80 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes prometheus_node_exporter 2025-09-29 06:38:48.436932 | orchestrator | 41248acc08eb registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-09-29 06:38:48.436943 | orchestrator | bb482cc4d476 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-09-29 06:38:48.436954 | orchestrator | 01892381725b registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-09-29 06:38:48.436972 | orchestrator | 7a74bf676d52 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-09-29 06:38:48.436983 | orchestrator | 06555cbd8de1 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-09-29 06:38:48.436994 | orchestrator | 15d64b298d5e registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-09-29 06:38:48.437005 | orchestrator | b53e2f59177a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-09-29 06:38:48.437016 | orchestrator | 8b581d1dd2ba registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- 
kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-09-29 06:38:48.437026 | orchestrator | e946108fe76a registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-09-29 06:38:48.437037 | orchestrator | 8a6fcc5d9190 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-09-29 06:38:48.437048 | orchestrator | 502a3049fcf0 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-09-29 06:38:48.437059 | orchestrator | f4df3d734626 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-09-29 06:38:48.437069 | orchestrator | 547acc8d4fc0 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-09-29 06:38:48.437105 | orchestrator | 5df9f8642453 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-09-29 06:38:48.437117 | orchestrator | 185855839fd5 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-09-29 06:38:48.437128 | orchestrator | 69c8eb2ff0df registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_sb_db 2025-09-29 06:38:48.437138 | orchestrator | a291bbae8762 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes ovn_nb_db 2025-09-29 06:38:48.437149 | orchestrator | 2f807e810c6d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-09-29 06:38:48.437160 | orchestrator | da78f1b2b062 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-09-29 06:38:48.437180 | orchestrator | 4a3e0cfc6686 
registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-09-29 06:38:48.437191 | orchestrator | c947841d1f4d registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-09-29 06:38:48.437202 | orchestrator | e1f56a8de141 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-09-29 06:38:48.437214 | orchestrator | 175319f7c9fd registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-09-29 06:38:48.437231 | orchestrator | 6d496db0c3c0 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-09-29 06:38:48.437243 | orchestrator | 524b3b3089b4 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-09-29 06:38:48.437253 | orchestrator | 18ed2f4473d3 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-09-29 06:38:48.437264 | orchestrator | 4c91e1eef36a registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-09-29 06:38:48.620947 | orchestrator | + echo 2025-09-29 06:38:48.621117 | orchestrator | 2025-09-29 06:38:48.621221 | orchestrator | ## Images @ testbed-node-2 2025-09-29 06:38:48.621239 | orchestrator | 2025-09-29 06:38:48.621251 | orchestrator | + echo '## Images @ testbed-node-2' 2025-09-29 06:38:48.621263 | orchestrator | + echo 2025-09-29 06:38:48.621274 | orchestrator | + osism container testbed-node-2 images 2025-09-29 06:38:50.724959 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-29 06:38:50.725156 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 741982dbec6f 3 hours ago 1.27GB 2025-09-29 06:38:50.725201 | orchestrator | 
registry.osism.tech/kolla/haproxy 2024.2 d63382ec1034 5 hours ago 292MB 2025-09-29 06:38:50.725215 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 cc6de98271b9 5 hours ago 383MB 2025-09-29 06:38:50.725226 | orchestrator | registry.osism.tech/kolla/cron 2024.2 7bbb3719b18d 5 hours ago 283MB 2025-09-29 06:38:50.725238 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 62d71a81d22a 5 hours ago 1.53GB 2025-09-29 06:38:50.725249 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 f2cdbadb5f0d 5 hours ago 1.55GB 2025-09-29 06:38:50.725259 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 1187998f243a 5 hours ago 340MB 2025-09-29 06:38:50.725270 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 4c919ded5455 5 hours ago 686MB 2025-09-29 06:38:50.725281 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 01bf12110f22 5 hours ago 294MB 2025-09-29 06:38:50.725291 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 29d37f287821 5 hours ago 1.02GB 2025-09-29 06:38:50.725302 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 93a6e45d44d5 5 hours ago 597MB 2025-09-29 06:38:50.725313 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 0b39a1d58938 5 hours ago 284MB 2025-09-29 06:38:50.725324 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 89534f132a5b 5 hours ago 300MB 2025-09-29 06:38:50.725334 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 6bd796ff4175 5 hours ago 300MB 2025-09-29 06:38:50.725345 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 fb17240a46d3 5 hours ago 465MB 2025-09-29 06:38:50.725356 | orchestrator | registry.osism.tech/kolla/redis 2024.2 7e2f8a4c40e0 5 hours ago 291MB 2025-09-29 06:38:50.725366 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 fddff0214cd2 5 hours ago 291MB 2025-09-29 06:38:50.725377 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9a0ab24193b1 5 hours ago 1.16GB 
2025-09-29 06:38:50.725388 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d5635479d056 5 hours ago 319MB 2025-09-29 06:38:50.725399 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 f4c1e352efec 5 hours ago 316MB 2025-09-29 06:38:50.725411 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 14da0e895ff1 5 hours ago 310MB 2025-09-29 06:38:50.725444 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 91ffc3618f4a 5 hours ago 375MB 2025-09-29 06:38:50.725456 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7cdfbd1f4840 5 hours ago 323MB 2025-09-29 06:38:50.725469 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 951b973d5f5d 5 hours ago 307MB 2025-09-29 06:38:50.725481 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 61dce529f4a7 5 hours ago 307MB 2025-09-29 06:38:50.725492 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 445fc581f7f9 5 hours ago 307MB 2025-09-29 06:38:50.725504 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 6f34dd9357c0 5 hours ago 307MB 2025-09-29 06:38:50.725517 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f012752088ed 5 hours ago 1.1GB 2025-09-29 06:38:50.725528 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 3c71339fdb0f 5 hours ago 1.06GB 2025-09-29 06:38:50.725540 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 d47706deb800 5 hours ago 1.06GB 2025-09-29 06:38:50.725551 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 cf3ed0aa60d2 5 hours ago 1.12GB 2025-09-29 06:38:50.725562 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 044f38dc6633 5 hours ago 1.18GB 2025-09-29 06:38:50.725573 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 09de8228be3b 5 hours ago 1.42GB 2025-09-29 06:38:50.725583 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 
0fefc914c23b 5 hours ago 1.42GB 2025-09-29 06:38:50.725594 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 b61fa9fd6092 5 hours ago 1.22GB 2025-09-29 06:38:50.725605 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 921c4c791441 5 hours ago 1.22GB 2025-09-29 06:38:50.725616 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 7cf146e96644 5 hours ago 1.22GB 2025-09-29 06:38:50.725651 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 c3639be31f27 5 hours ago 1.38GB 2025-09-29 06:38:50.725662 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 0028e981a4a4 5 hours ago 1.01GB 2025-09-29 06:38:50.725673 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 a53f0ece27bd 5 hours ago 1.01GB 2025-09-29 06:38:50.725684 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 97a9fb1361f5 5 hours ago 1.01GB 2025-09-29 06:38:50.725695 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 aa7c4eec338f 5 hours ago 994MB 2025-09-29 06:38:50.725705 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 8cfbef16ecb8 5 hours ago 1.26GB 2025-09-29 06:38:50.725716 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 40c3e4b4bb2f 5 hours ago 1.15GB 2025-09-29 06:38:50.725727 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 77f37ebaac7e 5 hours ago 1GB 2025-09-29 06:38:50.725737 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 163e26f1e0c9 5 hours ago 1.01GB 2025-09-29 06:38:50.725748 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 7bb26b81db01 5 hours ago 1.01GB 2025-09-29 06:38:50.725759 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f40952e206c7 5 hours ago 1GB 2025-09-29 06:38:50.725770 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 2d29edd8823b 5 hours ago 1GB 2025-09-29 06:38:50.725780 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 
80971f8378d6 5 hours ago 1GB 2025-09-29 06:38:50.907252 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-09-29 06:38:50.913600 | orchestrator | + set -e 2025-09-29 06:38:50.913693 | orchestrator | + source /opt/manager-vars.sh 2025-09-29 06:38:50.914388 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-29 06:38:50.914411 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-29 06:38:50.914423 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-29 06:38:50.914434 | orchestrator | ++ CEPH_VERSION=reef 2025-09-29 06:38:50.914445 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-29 06:38:50.914458 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-29 06:38:50.914469 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-29 06:38:50.914480 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-29 06:38:50.914491 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-29 06:38:50.914502 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-29 06:38:50.914513 | orchestrator | ++ export ARA=false 2025-09-29 06:38:50.914524 | orchestrator | ++ ARA=false 2025-09-29 06:38:50.914536 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-29 06:38:50.914547 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-29 06:38:50.914557 | orchestrator | ++ export TEMPEST=false 2025-09-29 06:38:50.914568 | orchestrator | ++ TEMPEST=false 2025-09-29 06:38:50.914580 | orchestrator | ++ export IS_ZUUL=true 2025-09-29 06:38:50.914600 | orchestrator | ++ IS_ZUUL=true 2025-09-29 06:38:50.914620 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 06:38:50.914646 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 06:38:50.914670 | orchestrator | ++ export EXTERNAL_API=false 2025-09-29 06:38:50.914689 | orchestrator | ++ EXTERNAL_API=false 2025-09-29 06:38:50.914707 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-29 06:38:50.914725 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-29 06:38:50.914742 | orchestrator | 
++ export IMAGE_NODE_USER=ubuntu 2025-09-29 06:38:50.914758 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-29 06:38:50.914775 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-29 06:38:50.914792 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-29 06:38:50.914808 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-29 06:38:50.914826 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-09-29 06:38:50.921598 | orchestrator | + set -e 2025-09-29 06:38:50.921670 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-29 06:38:50.921693 | orchestrator | ++ export INTERACTIVE=false 2025-09-29 06:38:50.921715 | orchestrator | ++ INTERACTIVE=false 2025-09-29 06:38:50.921734 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-29 06:38:50.921753 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-29 06:38:50.921773 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-29 06:38:50.922669 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-29 06:38:50.928121 | orchestrator | 2025-09-29 06:38:50.928187 | orchestrator | # Ceph status 2025-09-29 06:38:50.928200 | orchestrator | 2025-09-29 06:38:50.928212 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-29 06:38:50.928225 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-29 06:38:50.928236 | orchestrator | + echo 2025-09-29 06:38:50.928247 | orchestrator | + echo '# Ceph status' 2025-09-29 06:38:50.928258 | orchestrator | + echo 2025-09-29 06:38:50.928269 | orchestrator | + ceph -s 2025-09-29 06:38:51.520141 | orchestrator | cluster: 2025-09-29 06:38:51.520253 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-09-29 06:38:51.520270 | orchestrator | health: HEALTH_OK 2025-09-29 06:38:51.520284 | orchestrator | 2025-09-29 06:38:51.520296 | orchestrator | services: 2025-09-29 06:38:51.520308 | orchestrator | mon: 3 
daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-09-29 06:38:51.520334 | orchestrator | mgr: testbed-node-1(active, since 16m), standbys: testbed-node-2, testbed-node-0 2025-09-29 06:38:51.520346 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-09-29 06:38:51.520358 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-09-29 06:38:51.520369 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-09-29 06:38:51.520380 | orchestrator | 2025-09-29 06:38:51.520391 | orchestrator | data: 2025-09-29 06:38:51.520402 | orchestrator | volumes: 1/1 healthy 2025-09-29 06:38:51.520413 | orchestrator | pools: 14 pools, 401 pgs 2025-09-29 06:38:51.520425 | orchestrator | objects: 523 objects, 2.2 GiB 2025-09-29 06:38:51.520435 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-09-29 06:38:51.520446 | orchestrator | pgs: 401 active+clean 2025-09-29 06:38:51.520457 | orchestrator | 2025-09-29 06:38:51.559635 | orchestrator | 2025-09-29 06:38:51.559730 | orchestrator | # Ceph versions 2025-09-29 06:38:51.559741 | orchestrator | 2025-09-29 06:38:51.559750 | orchestrator | + echo 2025-09-29 06:38:51.559782 | orchestrator | + echo '# Ceph versions' 2025-09-29 06:38:51.559791 | orchestrator | + echo 2025-09-29 06:38:51.559799 | orchestrator | + ceph versions 2025-09-29 06:38:52.195402 | orchestrator | { 2025-09-29 06:38:52.195511 | orchestrator | "mon": { 2025-09-29 06:38:52.195552 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-29 06:38:52.195568 | orchestrator | }, 2025-09-29 06:38:52.195582 | orchestrator | "mgr": { 2025-09-29 06:38:52.195595 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-29 06:38:52.195608 | orchestrator | }, 2025-09-29 06:38:52.195619 | orchestrator | "osd": { 2025-09-29 06:38:52.195633 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef 
(stable)": 6 2025-09-29 06:38:52.195645 | orchestrator | }, 2025-09-29 06:38:52.195657 | orchestrator | "mds": { 2025-09-29 06:38:52.195669 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-29 06:38:52.195681 | orchestrator | }, 2025-09-29 06:38:52.195693 | orchestrator | "rgw": { 2025-09-29 06:38:52.195703 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-09-29 06:38:52.195713 | orchestrator | }, 2025-09-29 06:38:52.195724 | orchestrator | "overall": { 2025-09-29 06:38:52.195736 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-09-29 06:38:52.195747 | orchestrator | } 2025-09-29 06:38:52.195758 | orchestrator | } 2025-09-29 06:38:52.241972 | orchestrator | 2025-09-29 06:38:52.242222 | orchestrator | # Ceph OSD tree 2025-09-29 06:38:52.242251 | orchestrator | 2025-09-29 06:38:52.242272 | orchestrator | + echo 2025-09-29 06:38:52.242290 | orchestrator | + echo '# Ceph OSD tree' 2025-09-29 06:38:52.242309 | orchestrator | + echo 2025-09-29 06:38:52.242327 | orchestrator | + ceph osd df tree 2025-09-29 06:38:52.715931 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-09-29 06:38:52.716030 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-09-29 06:38:52.716042 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-09-29 06:38:52.716051 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 7.10 1.20 192 up osd.2 2025-09-29 06:38:52.716060 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 969 MiB 899 MiB 1 KiB 70 MiB 19 GiB 4.74 0.80 200 up osd.3 2025-09-29 06:38:52.716068 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-09-29 06:38:52.716120 | 
orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 70 MiB 19 GiB 7.39 1.25 201 up osd.0 2025-09-29 06:38:52.716130 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 908 MiB 835 MiB 1 KiB 74 MiB 19 GiB 4.44 0.75 189 up osd.5 2025-09-29 06:38:52.716138 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-09-29 06:38:52.716146 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.71 1.13 192 up osd.1 2025-09-29 06:38:52.716154 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 70 MiB 19 GiB 5.13 0.87 196 up osd.4 2025-09-29 06:38:52.716162 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-09-29 06:38:52.716171 | orchestrator | MIN/MAX VAR: 0.75/1.25 STDDEV: 1.18 2025-09-29 06:38:52.748610 | orchestrator | 2025-09-29 06:38:52.748704 | orchestrator | # Ceph monitor status 2025-09-29 06:38:52.748719 | orchestrator | 2025-09-29 06:38:52.748731 | orchestrator | + echo 2025-09-29 06:38:52.748743 | orchestrator | + echo '# Ceph monitor status' 2025-09-29 06:38:52.748755 | orchestrator | + echo 2025-09-29 06:38:52.748766 | orchestrator | + ceph mon stat 2025-09-29 06:38:53.261573 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 14, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-09-29 06:38:53.293558 | orchestrator | 2025-09-29 06:38:53.293657 | orchestrator | # Ceph quorum status 2025-09-29 06:38:53.293673 | orchestrator | 2025-09-29 06:38:53.293685 | orchestrator | + echo 2025-09-29 06:38:53.293697 | orchestrator | + echo '# Ceph quorum status' 2025-09-29 06:38:53.293708 | orchestrator | + echo 2025-09-29 06:38:53.294006 | orchestrator | + ceph quorum_status 
2025-09-29 06:38:53.294110 | orchestrator | + jq 2025-09-29 06:38:53.891212 | orchestrator | { 2025-09-29 06:38:53.891311 | orchestrator | "election_epoch": 14, 2025-09-29 06:38:53.891324 | orchestrator | "quorum": [ 2025-09-29 06:38:53.891333 | orchestrator | 0, 2025-09-29 06:38:53.891342 | orchestrator | 1, 2025-09-29 06:38:53.891350 | orchestrator | 2 2025-09-29 06:38:53.891358 | orchestrator | ], 2025-09-29 06:38:53.891369 | orchestrator | "quorum_names": [ 2025-09-29 06:38:53.891382 | orchestrator | "testbed-node-0", 2025-09-29 06:38:53.891394 | orchestrator | "testbed-node-1", 2025-09-29 06:38:53.891415 | orchestrator | "testbed-node-2" 2025-09-29 06:38:53.891429 | orchestrator | ], 2025-09-29 06:38:53.891442 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-09-29 06:38:53.891464 | orchestrator | "quorum_age": 1705, 2025-09-29 06:38:53.891479 | orchestrator | "features": { 2025-09-29 06:38:53.891509 | orchestrator | "quorum_con": "4540138322906710015", 2025-09-29 06:38:53.891523 | orchestrator | "quorum_mon": [ 2025-09-29 06:38:53.891547 | orchestrator | "kraken", 2025-09-29 06:38:53.891561 | orchestrator | "luminous", 2025-09-29 06:38:53.891574 | orchestrator | "mimic", 2025-09-29 06:38:53.891588 | orchestrator | "osdmap-prune", 2025-09-29 06:38:53.891602 | orchestrator | "nautilus", 2025-09-29 06:38:53.891615 | orchestrator | "octopus", 2025-09-29 06:38:53.891627 | orchestrator | "pacific", 2025-09-29 06:38:53.891635 | orchestrator | "elector-pinging", 2025-09-29 06:38:53.891643 | orchestrator | "quincy", 2025-09-29 06:38:53.891651 | orchestrator | "reef" 2025-09-29 06:38:53.891659 | orchestrator | ] 2025-09-29 06:38:53.891667 | orchestrator | }, 2025-09-29 06:38:53.891674 | orchestrator | "monmap": { 2025-09-29 06:38:53.891683 | orchestrator | "epoch": 1, 2025-09-29 06:38:53.891691 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-09-29 06:38:53.891700 | orchestrator | "modified": "2025-09-29T06:10:06.574323Z", 2025-09-29 
06:38:53.891708 | orchestrator | "created": "2025-09-29T06:10:06.574323Z", 2025-09-29 06:38:53.891716 | orchestrator | "min_mon_release": 18, 2025-09-29 06:38:53.891724 | orchestrator | "min_mon_release_name": "reef", 2025-09-29 06:38:53.891733 | orchestrator | "election_strategy": 1, 2025-09-29 06:38:53.891742 | orchestrator | "disallowed_leaders: ": "", 2025-09-29 06:38:53.891751 | orchestrator | "stretch_mode": false, 2025-09-29 06:38:53.891759 | orchestrator | "tiebreaker_mon": "", 2025-09-29 06:38:53.891768 | orchestrator | "removed_ranks: ": "", 2025-09-29 06:38:53.891777 | orchestrator | "features": { 2025-09-29 06:38:53.891787 | orchestrator | "persistent": [ 2025-09-29 06:38:53.891795 | orchestrator | "kraken", 2025-09-29 06:38:53.891804 | orchestrator | "luminous", 2025-09-29 06:38:53.891813 | orchestrator | "mimic", 2025-09-29 06:38:53.891822 | orchestrator | "osdmap-prune", 2025-09-29 06:38:53.891831 | orchestrator | "nautilus", 2025-09-29 06:38:53.891840 | orchestrator | "octopus", 2025-09-29 06:38:53.891849 | orchestrator | "pacific", 2025-09-29 06:38:53.891858 | orchestrator | "elector-pinging", 2025-09-29 06:38:53.891866 | orchestrator | "quincy", 2025-09-29 06:38:53.891875 | orchestrator | "reef" 2025-09-29 06:38:53.891884 | orchestrator | ], 2025-09-29 06:38:53.891893 | orchestrator | "optional": [] 2025-09-29 06:38:53.891902 | orchestrator | }, 2025-09-29 06:38:53.891911 | orchestrator | "mons": [ 2025-09-29 06:38:53.891920 | orchestrator | { 2025-09-29 06:38:53.891929 | orchestrator | "rank": 0, 2025-09-29 06:38:53.891938 | orchestrator | "name": "testbed-node-0", 2025-09-29 06:38:53.891952 | orchestrator | "public_addrs": { 2025-09-29 06:38:53.891965 | orchestrator | "addrvec": [ 2025-09-29 06:38:53.891979 | orchestrator | { 2025-09-29 06:38:53.891997 | orchestrator | "type": "v2", 2025-09-29 06:38:53.892013 | orchestrator | "addr": "192.168.16.10:3300", 2025-09-29 06:38:53.892027 | orchestrator | "nonce": 0 2025-09-29 06:38:53.892040 | 
orchestrator | }, 2025-09-29 06:38:53.892056 | orchestrator | { 2025-09-29 06:38:53.892069 | orchestrator | "type": "v1", 2025-09-29 06:38:53.892113 | orchestrator | "addr": "192.168.16.10:6789", 2025-09-29 06:38:53.892128 | orchestrator | "nonce": 0 2025-09-29 06:38:53.892163 | orchestrator | } 2025-09-29 06:38:53.892173 | orchestrator | ] 2025-09-29 06:38:53.892181 | orchestrator | }, 2025-09-29 06:38:53.892188 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-09-29 06:38:53.892197 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-09-29 06:38:53.892205 | orchestrator | "priority": 0, 2025-09-29 06:38:53.892213 | orchestrator | "weight": 0, 2025-09-29 06:38:53.892220 | orchestrator | "crush_location": "{}" 2025-09-29 06:38:53.892228 | orchestrator | }, 2025-09-29 06:38:53.892236 | orchestrator | { 2025-09-29 06:38:53.892244 | orchestrator | "rank": 1, 2025-09-29 06:38:53.892252 | orchestrator | "name": "testbed-node-1", 2025-09-29 06:38:53.892259 | orchestrator | "public_addrs": { 2025-09-29 06:38:53.892267 | orchestrator | "addrvec": [ 2025-09-29 06:38:53.892275 | orchestrator | { 2025-09-29 06:38:53.892283 | orchestrator | "type": "v2", 2025-09-29 06:38:53.892291 | orchestrator | "addr": "192.168.16.11:3300", 2025-09-29 06:38:53.892299 | orchestrator | "nonce": 0 2025-09-29 06:38:53.892306 | orchestrator | }, 2025-09-29 06:38:53.892314 | orchestrator | { 2025-09-29 06:38:53.892322 | orchestrator | "type": "v1", 2025-09-29 06:38:53.892330 | orchestrator | "addr": "192.168.16.11:6789", 2025-09-29 06:38:53.892338 | orchestrator | "nonce": 0 2025-09-29 06:38:53.892346 | orchestrator | } 2025-09-29 06:38:53.892354 | orchestrator | ] 2025-09-29 06:38:53.892362 | orchestrator | }, 2025-09-29 06:38:53.892370 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-09-29 06:38:53.892378 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-09-29 06:38:53.892386 | orchestrator | "priority": 0, 2025-09-29 06:38:53.892393 | orchestrator | "weight": 0, 
2025-09-29 06:38:53.892401 | orchestrator | "crush_location": "{}" 2025-09-29 06:38:53.892409 | orchestrator | }, 2025-09-29 06:38:53.892417 | orchestrator | { 2025-09-29 06:38:53.892425 | orchestrator | "rank": 2, 2025-09-29 06:38:53.892432 | orchestrator | "name": "testbed-node-2", 2025-09-29 06:38:53.892440 | orchestrator | "public_addrs": { 2025-09-29 06:38:53.892448 | orchestrator | "addrvec": [ 2025-09-29 06:38:53.892456 | orchestrator | { 2025-09-29 06:38:53.892464 | orchestrator | "type": "v2", 2025-09-29 06:38:53.892471 | orchestrator | "addr": "192.168.16.12:3300", 2025-09-29 06:38:53.892479 | orchestrator | "nonce": 0 2025-09-29 06:38:53.892487 | orchestrator | }, 2025-09-29 06:38:53.892495 | orchestrator | { 2025-09-29 06:38:53.892503 | orchestrator | "type": "v1", 2025-09-29 06:38:53.892510 | orchestrator | "addr": "192.168.16.12:6789", 2025-09-29 06:38:53.892518 | orchestrator | "nonce": 0 2025-09-29 06:38:53.892526 | orchestrator | } 2025-09-29 06:38:53.892534 | orchestrator | ] 2025-09-29 06:38:53.892542 | orchestrator | }, 2025-09-29 06:38:53.892549 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-09-29 06:38:53.892558 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-09-29 06:38:53.892565 | orchestrator | "priority": 0, 2025-09-29 06:38:53.892573 | orchestrator | "weight": 0, 2025-09-29 06:38:53.892581 | orchestrator | "crush_location": "{}" 2025-09-29 06:38:53.892589 | orchestrator | } 2025-09-29 06:38:53.892597 | orchestrator | ] 2025-09-29 06:38:53.892605 | orchestrator | } 2025-09-29 06:38:53.892613 | orchestrator | } 2025-09-29 06:38:53.892632 | orchestrator | 2025-09-29 06:38:53.892641 | orchestrator | # Ceph free space status 2025-09-29 06:38:53.892649 | orchestrator | 2025-09-29 06:38:53.892657 | orchestrator | + echo 2025-09-29 06:38:53.892665 | orchestrator | + echo '# Ceph free space status' 2025-09-29 06:38:53.892673 | orchestrator | + echo 2025-09-29 06:38:53.892681 | orchestrator | + ceph df 2025-09-29 06:38:54.420022 
| orchestrator | --- RAW STORAGE --- 2025-09-29 06:38:54.420165 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-09-29 06:38:54.420197 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-09-29 06:38:54.420209 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-09-29 06:38:54.420221 | orchestrator | 2025-09-29 06:38:54.420233 | orchestrator | --- POOLS --- 2025-09-29 06:38:54.420245 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-09-29 06:38:54.420258 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-09-29 06:38:54.420269 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-09-29 06:38:54.420280 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-09-29 06:38:54.420317 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-09-29 06:38:54.420348 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-09-29 06:38:54.420368 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-09-29 06:38:54.420386 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-09-29 06:38:54.420406 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-09-29 06:38:54.420424 | orchestrator | .rgw.root 9 32 3.5 KiB 7 56 KiB 0 53 GiB 2025-09-29 06:38:54.420440 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-09-29 06:38:54.420451 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-09-29 06:38:54.420462 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB 2025-09-29 06:38:54.420473 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-09-29 06:38:54.420484 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-09-29 06:38:54.451358 | orchestrator | ++ semver latest 5.0.0 2025-09-29 06:38:54.492443 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-29 06:38:54.492536 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-29 06:38:54.492552 | orchestrator | + [[ ! 
-e /etc/redhat-release ]] 2025-09-29 06:38:54.492572 | orchestrator | + osism apply facts 2025-09-29 06:39:06.326119 | orchestrator | 2025-09-29 06:39:06 | INFO  | Task 59b2fffb-93c1-4d42-b305-d3dbd0e8b115 (facts) was prepared for execution. 2025-09-29 06:39:06.326230 | orchestrator | 2025-09-29 06:39:06 | INFO  | It takes a moment until task 59b2fffb-93c1-4d42-b305-d3dbd0e8b115 (facts) has been started and output is visible here. 2025-09-29 06:39:20.466216 | orchestrator | 2025-09-29 06:39:20.466321 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-29 06:39:20.466331 | orchestrator | 2025-09-29 06:39:20.466336 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-29 06:39:20.466342 | orchestrator | Monday 29 September 2025 06:39:10 +0000 (0:00:00.313) 0:00:00.313 ****** 2025-09-29 06:39:20.466346 | orchestrator | ok: [testbed-manager] 2025-09-29 06:39:20.466352 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:20.466356 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:39:20.466361 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:39:20.466365 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:39:20.466370 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:39:20.466374 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:39:20.466378 | orchestrator | 2025-09-29 06:39:20.466383 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-29 06:39:20.466401 | orchestrator | Monday 29 September 2025 06:39:11 +0000 (0:00:01.479) 0:00:01.793 ****** 2025-09-29 06:39:20.466406 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:39:20.466411 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:20.466415 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:39:20.466419 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:39:20.466423 | orchestrator | skipping: [testbed-node-3] 2025-09-29 
06:39:20.466427 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:39:20.466431 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:39:20.466435 | orchestrator | 2025-09-29 06:39:20.466439 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-29 06:39:20.466444 | orchestrator | 2025-09-29 06:39:20.466448 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-29 06:39:20.466452 | orchestrator | Monday 29 September 2025 06:39:13 +0000 (0:00:01.272) 0:00:03.065 ****** 2025-09-29 06:39:20.466456 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:39:20.466460 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:39:20.466464 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:20.466468 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:39:20.466472 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:39:20.466492 | orchestrator | ok: [testbed-manager] 2025-09-29 06:39:20.466496 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:39:20.466500 | orchestrator | 2025-09-29 06:39:20.466504 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-29 06:39:20.466508 | orchestrator | 2025-09-29 06:39:20.466513 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-29 06:39:20.466517 | orchestrator | Monday 29 September 2025 06:39:19 +0000 (0:00:06.289) 0:00:09.355 ****** 2025-09-29 06:39:20.466521 | orchestrator | skipping: [testbed-manager] 2025-09-29 06:39:20.466525 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:20.466529 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:39:20.466533 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:39:20.466537 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:39:20.466541 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:39:20.466545 | orchestrator | skipping: [testbed-node-5] 
2025-09-29 06:39:20.466549 | orchestrator | 2025-09-29 06:39:20.466553 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:39:20.466558 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:20.466563 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:20.466567 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:20.466571 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:20.466575 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:20.466579 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:20.466584 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:20.466588 | orchestrator | 2025-09-29 06:39:20.466592 | orchestrator | 2025-09-29 06:39:20.466596 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:39:20.466600 | orchestrator | Monday 29 September 2025 06:39:20 +0000 (0:00:00.540) 0:00:09.895 ****** 2025-09-29 06:39:20.466604 | orchestrator | =============================================================================== 2025-09-29 06:39:20.466608 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.29s 2025-09-29 06:39:20.466612 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.48s 2025-09-29 06:39:20.466617 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s 2025-09-29 06:39:20.466621 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 0.54s 2025-09-29 06:39:20.769345 | orchestrator | + osism validate ceph-mons 2025-09-29 06:39:52.004023 | orchestrator | 2025-09-29 06:39:52.004137 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-09-29 06:39:52.004147 | orchestrator | 2025-09-29 06:39:52.004154 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-29 06:39:52.004161 | orchestrator | Monday 29 September 2025 06:39:37 +0000 (0:00:00.484) 0:00:00.484 ****** 2025-09-29 06:39:52.004168 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:39:52.004174 | orchestrator | 2025-09-29 06:39:52.004181 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-29 06:39:52.004187 | orchestrator | Monday 29 September 2025 06:39:37 +0000 (0:00:00.643) 0:00:01.127 ****** 2025-09-29 06:39:52.004194 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:39:52.004219 | orchestrator | 2025-09-29 06:39:52.004225 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-29 06:39:52.004232 | orchestrator | Monday 29 September 2025 06:39:38 +0000 (0:00:00.821) 0:00:01.949 ****** 2025-09-29 06:39:52.004239 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004247 | orchestrator | 2025-09-29 06:39:52.004253 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-29 06:39:52.004260 | orchestrator | Monday 29 September 2025 06:39:38 +0000 (0:00:00.244) 0:00:02.194 ****** 2025-09-29 06:39:52.004266 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004273 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:39:52.004280 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:39:52.004287 | orchestrator | 2025-09-29 06:39:52.004291 | orchestrator | TASK [Get 
container info] ****************************************************** 2025-09-29 06:39:52.004295 | orchestrator | Monday 29 September 2025 06:39:39 +0000 (0:00:00.297) 0:00:02.492 ****** 2025-09-29 06:39:52.004299 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:39:52.004302 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:39:52.004306 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004310 | orchestrator | 2025-09-29 06:39:52.004314 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-29 06:39:52.004318 | orchestrator | Monday 29 September 2025 06:39:40 +0000 (0:00:01.052) 0:00:03.545 ****** 2025-09-29 06:39:52.004323 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004327 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:39:52.004330 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:39:52.004334 | orchestrator | 2025-09-29 06:39:52.004338 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-29 06:39:52.004341 | orchestrator | Monday 29 September 2025 06:39:40 +0000 (0:00:00.298) 0:00:03.844 ****** 2025-09-29 06:39:52.004345 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004349 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:39:52.004353 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:39:52.004356 | orchestrator | 2025-09-29 06:39:52.004360 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-29 06:39:52.004364 | orchestrator | Monday 29 September 2025 06:39:40 +0000 (0:00:00.494) 0:00:04.338 ****** 2025-09-29 06:39:52.004367 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004371 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:39:52.004375 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:39:52.004378 | orchestrator | 2025-09-29 06:39:52.004382 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] 
******************** 2025-09-29 06:39:52.004386 | orchestrator | Monday 29 September 2025 06:39:41 +0000 (0:00:00.285) 0:00:04.623 ****** 2025-09-29 06:39:52.004389 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004393 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:39:52.004397 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:39:52.004400 | orchestrator | 2025-09-29 06:39:52.004404 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-09-29 06:39:52.004408 | orchestrator | Monday 29 September 2025 06:39:41 +0000 (0:00:00.286) 0:00:04.910 ****** 2025-09-29 06:39:52.004412 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004415 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:39:52.004419 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:39:52.004423 | orchestrator | 2025-09-29 06:39:52.004426 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-29 06:39:52.004430 | orchestrator | Monday 29 September 2025 06:39:41 +0000 (0:00:00.288) 0:00:05.199 ****** 2025-09-29 06:39:52.004434 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004437 | orchestrator | 2025-09-29 06:39:52.004441 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-29 06:39:52.004445 | orchestrator | Monday 29 September 2025 06:39:42 +0000 (0:00:00.251) 0:00:05.451 ****** 2025-09-29 06:39:52.004449 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004452 | orchestrator | 2025-09-29 06:39:52.004456 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-29 06:39:52.004465 | orchestrator | Monday 29 September 2025 06:39:42 +0000 (0:00:00.439) 0:00:05.891 ****** 2025-09-29 06:39:52.004468 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004472 | orchestrator | 2025-09-29 06:39:52.004476 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-09-29 06:39:52.004494 | orchestrator | Monday 29 September 2025 06:39:43 +0000 (0:00:00.627) 0:00:06.518 ****** 2025-09-29 06:39:52.004498 | orchestrator | 2025-09-29 06:39:52.004502 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:39:52.004505 | orchestrator | Monday 29 September 2025 06:39:43 +0000 (0:00:00.064) 0:00:06.583 ****** 2025-09-29 06:39:52.004509 | orchestrator | 2025-09-29 06:39:52.004513 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:39:52.004517 | orchestrator | Monday 29 September 2025 06:39:43 +0000 (0:00:00.064) 0:00:06.648 ****** 2025-09-29 06:39:52.004520 | orchestrator | 2025-09-29 06:39:52.004524 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-29 06:39:52.004528 | orchestrator | Monday 29 September 2025 06:39:43 +0000 (0:00:00.067) 0:00:06.716 ****** 2025-09-29 06:39:52.004532 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004535 | orchestrator | 2025-09-29 06:39:52.004539 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-29 06:39:52.004543 | orchestrator | Monday 29 September 2025 06:39:43 +0000 (0:00:00.245) 0:00:06.961 ****** 2025-09-29 06:39:52.004547 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004550 | orchestrator | 2025-09-29 06:39:52.004566 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-09-29 06:39:52.004570 | orchestrator | Monday 29 September 2025 06:39:43 +0000 (0:00:00.246) 0:00:07.208 ****** 2025-09-29 06:39:52.004574 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004578 | orchestrator | 2025-09-29 06:39:52.004581 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 
2025-09-29 06:39:52.004585 | orchestrator | Monday 29 September 2025 06:39:43 +0000 (0:00:00.130) 0:00:07.338 ****** 2025-09-29 06:39:52.004589 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:39:52.004594 | orchestrator | 2025-09-29 06:39:52.004598 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-29 06:39:52.004602 | orchestrator | Monday 29 September 2025 06:39:45 +0000 (0:00:01.622) 0:00:08.960 ****** 2025-09-29 06:39:52.004606 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004611 | orchestrator | 2025-09-29 06:39:52.004615 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-09-29 06:39:52.004620 | orchestrator | Monday 29 September 2025 06:39:45 +0000 (0:00:00.275) 0:00:09.235 ****** 2025-09-29 06:39:52.004624 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004628 | orchestrator | 2025-09-29 06:39:52.004632 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-29 06:39:52.004637 | orchestrator | Monday 29 September 2025 06:39:45 +0000 (0:00:00.121) 0:00:09.357 ****** 2025-09-29 06:39:52.004641 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004646 | orchestrator | 2025-09-29 06:39:52.004653 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-29 06:39:52.004657 | orchestrator | Monday 29 September 2025 06:39:46 +0000 (0:00:00.292) 0:00:09.649 ****** 2025-09-29 06:39:52.004662 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004666 | orchestrator | 2025-09-29 06:39:52.004671 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-29 06:39:52.004675 | orchestrator | Monday 29 September 2025 06:39:46 +0000 (0:00:00.384) 0:00:10.033 ****** 2025-09-29 06:39:52.004679 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004683 | orchestrator | 
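The quorum test that just passed compares the monitors listed in the monmap against the current quorum membership. A minimal sketch of that comparison — field names are taken from the `ceph mon dump` JSON earlier in this log, but the validator's actual implementation is not shown here, so treat this as illustrative only:

```python
def mons_in_quorum(monmap: dict, quorum_names: list) -> bool:
    """Return True if every monitor in the monmap is in quorum.

    `monmap` follows the structure of the `ceph mon dump -f json` output
    shown above; `quorum_names` would come from `ceph quorum_status`.
    Illustrative sketch, not the osism validator's real code.
    """
    expected = {m["name"] for m in monmap.get("mons", [])}
    return expected == set(quorum_names)


# Minimal excerpt mirroring the monmap dumped earlier in the log
monmap = {"mons": [{"name": "testbed-node-0"},
                   {"name": "testbed-node-1"},
                   {"name": "testbed-node-2"}]}
print(mons_in_quorum(monmap, ["testbed-node-0", "testbed-node-1", "testbed-node-2"]))  # True
print(mons_in_quorum(monmap, ["testbed-node-0", "testbed-node-1"]))  # False
```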
2025-09-29 06:39:52.004688 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-29 06:39:52.004692 | orchestrator | Monday 29 September 2025 06:39:46 +0000 (0:00:00.100) 0:00:10.134 ****** 2025-09-29 06:39:52.004697 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004705 | orchestrator | 2025-09-29 06:39:52.004709 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-09-29 06:39:52.004714 | orchestrator | Monday 29 September 2025 06:39:46 +0000 (0:00:00.120) 0:00:10.255 ****** 2025-09-29 06:39:52.004718 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004723 | orchestrator | 2025-09-29 06:39:52.004727 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-29 06:39:52.004731 | orchestrator | Monday 29 September 2025 06:39:46 +0000 (0:00:00.092) 0:00:10.347 ****** 2025-09-29 06:39:52.004736 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:39:52.004740 | orchestrator | 2025-09-29 06:39:52.004745 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-29 06:39:52.004749 | orchestrator | Monday 29 September 2025 06:39:48 +0000 (0:00:01.349) 0:00:11.697 ****** 2025-09-29 06:39:52.004753 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004758 | orchestrator | 2025-09-29 06:39:52.004762 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-29 06:39:52.004767 | orchestrator | Monday 29 September 2025 06:39:48 +0000 (0:00:00.260) 0:00:11.958 ****** 2025-09-29 06:39:52.004771 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004776 | orchestrator | 2025-09-29 06:39:52.004780 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-29 06:39:52.004785 | orchestrator | Monday 29 September 2025 06:39:48 +0000 (0:00:00.127) 
0:00:12.085 ****** 2025-09-29 06:39:52.004789 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:39:52.004793 | orchestrator | 2025-09-29 06:39:52.004798 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-09-29 06:39:52.004802 | orchestrator | Monday 29 September 2025 06:39:48 +0000 (0:00:00.120) 0:00:12.205 ****** 2025-09-29 06:39:52.004806 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004811 | orchestrator | 2025-09-29 06:39:52.004815 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-29 06:39:52.004819 | orchestrator | Monday 29 September 2025 06:39:48 +0000 (0:00:00.122) 0:00:12.327 ****** 2025-09-29 06:39:52.004824 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004828 | orchestrator | 2025-09-29 06:39:52.004832 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-29 06:39:52.004837 | orchestrator | Monday 29 September 2025 06:39:49 +0000 (0:00:00.123) 0:00:12.451 ****** 2025-09-29 06:39:52.004841 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:39:52.004846 | orchestrator | 2025-09-29 06:39:52.004850 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-29 06:39:52.004855 | orchestrator | Monday 29 September 2025 06:39:49 +0000 (0:00:00.239) 0:00:12.691 ****** 2025-09-29 06:39:52.004859 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:39:52.004863 | orchestrator | 2025-09-29 06:39:52.004868 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-29 06:39:52.004872 | orchestrator | Monday 29 September 2025 06:39:49 +0000 (0:00:00.340) 0:00:13.031 ****** 2025-09-29 06:39:52.004876 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:39:52.004881 | orchestrator | 2025-09-29 06:39:52.004885 | 
orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-29 06:39:52.004890 | orchestrator | Monday 29 September 2025 06:39:51 +0000 (0:00:01.671) 0:00:14.703 ****** 2025-09-29 06:39:52.004894 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:39:52.004898 | orchestrator | 2025-09-29 06:39:52.004903 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-29 06:39:52.004907 | orchestrator | Monday 29 September 2025 06:39:51 +0000 (0:00:00.254) 0:00:14.958 ****** 2025-09-29 06:39:52.004911 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:39:52.004916 | orchestrator | 2025-09-29 06:39:52.004923 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:39:53.720639 | orchestrator | Monday 29 September 2025 06:39:51 +0000 (0:00:00.234) 0:00:15.192 ****** 2025-09-29 06:39:53.720758 | orchestrator | 2025-09-29 06:39:53.720767 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:39:53.720773 | orchestrator | Monday 29 September 2025 06:39:51 +0000 (0:00:00.064) 0:00:15.257 ****** 2025-09-29 06:39:53.720778 | orchestrator | 2025-09-29 06:39:53.720783 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:39:53.720789 | orchestrator | Monday 29 September 2025 06:39:51 +0000 (0:00:00.067) 0:00:15.324 ****** 2025-09-29 06:39:53.720794 | orchestrator | 2025-09-29 06:39:53.720798 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-29 06:39:53.720803 | orchestrator | Monday 29 September 2025 06:39:51 +0000 (0:00:00.068) 0:00:15.393 ****** 2025-09-29 06:39:53.720809 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:39:53.720814 | orchestrator | 
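The "Write report file" handler above persists the aggregated test results as JSON under `/opt/reports/validator/` on the manager. A rough sketch of such aggregation — the report layout below is an assumption for illustration and may differ from the osism validator's real schema:

```python
import json
import time


def write_report(results: dict, path: str) -> None:
    """Aggregate per-test results into a JSON report file.

    `results` maps test names to "passed"/"failed"; the overall result
    is "failed" if any single test failed. Hypothetical schema, chosen
    to mirror the passed/failed aggregation steps seen in the log.
    """
    report = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S%z"),
        "result": "passed" if all(v == "passed" for v in results.values()) else "failed",
        "tests": results,
    }
    with open(path, "w") as fh:
        json.dump(report, fh, indent=2)
```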
2025-09-29 06:39:53.720818 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-29 06:39:53.720823 | orchestrator | Monday 29 September 2025 06:39:53 +0000 (0:00:01.143) 0:00:16.536 ****** 2025-09-29 06:39:53.720828 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-29 06:39:53.720833 | orchestrator |  "msg": [ 2025-09-29 06:39:53.720839 | orchestrator |  "Validator run completed.", 2025-09-29 06:39:53.720844 | orchestrator |  "You can find the report file here:", 2025-09-29 06:39:53.720849 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-09-29T06:39:37+00:00-report.json", 2025-09-29 06:39:53.720855 | orchestrator |  "on the following host:", 2025-09-29 06:39:53.720860 | orchestrator |  "testbed-manager" 2025-09-29 06:39:53.720864 | orchestrator |  ] 2025-09-29 06:39:53.720869 | orchestrator | } 2025-09-29 06:39:53.720874 | orchestrator | 2025-09-29 06:39:53.720879 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:39:53.720885 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-29 06:39:53.720890 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:53.720895 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:39:53.720900 | orchestrator | 2025-09-29 06:39:53.720905 | orchestrator | 2025-09-29 06:39:53.720910 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:39:53.720916 | orchestrator | Monday 29 September 2025 06:39:53 +0000 (0:00:00.348) 0:00:16.885 ****** 2025-09-29 06:39:53.720920 | orchestrator | =============================================================================== 2025-09-29 06:39:53.720925 | orchestrator | Aggregate test results 
step one ----------------------------------------- 1.67s 2025-09-29 06:39:53.720930 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.62s 2025-09-29 06:39:53.720938 | orchestrator | Gather status data ------------------------------------------------------ 1.35s 2025-09-29 06:39:53.720943 | orchestrator | Write report file ------------------------------------------------------- 1.14s 2025-09-29 06:39:53.720947 | orchestrator | Get container info ------------------------------------------------------ 1.05s 2025-09-29 06:39:53.720952 | orchestrator | Create report output directory ------------------------------------------ 0.82s 2025-09-29 06:39:53.720957 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s 2025-09-29 06:39:53.720962 | orchestrator | Aggregate test results step three --------------------------------------- 0.63s 2025-09-29 06:39:53.720966 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s 2025-09-29 06:39:53.720971 | orchestrator | Aggregate test results step two ----------------------------------------- 0.44s 2025-09-29 06:39:53.720976 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.38s 2025-09-29 06:39:53.720985 | orchestrator | Print report file information ------------------------------------------- 0.35s 2025-09-29 06:39:53.720990 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.34s 2025-09-29 06:39:53.720995 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s 2025-09-29 06:39:53.721000 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-09-29 06:39:53.721004 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.29s 2025-09-29 06:39:53.721009 | orchestrator | Set test result to passed if ceph-mon 
is running ------------------------ 0.29s 2025-09-29 06:39:53.721014 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.29s 2025-09-29 06:39:53.721019 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2025-09-29 06:39:53.721023 | orchestrator | Set quorum test data ---------------------------------------------------- 0.28s 2025-09-29 06:39:53.912041 | orchestrator | + osism validate ceph-mgrs 2025-09-29 06:40:24.367533 | orchestrator | 2025-09-29 06:40:24.367658 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-29 06:40:24.367684 | orchestrator | 2025-09-29 06:40:24.367704 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-29 06:40:24.367725 | orchestrator | Monday 29 September 2025 06:40:09 +0000 (0:00:00.413) 0:00:00.413 ****** 2025-09-29 06:40:24.367743 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:24.367758 | orchestrator | 2025-09-29 06:40:24.367770 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-29 06:40:24.367781 | orchestrator | Monday 29 September 2025 06:40:10 +0000 (0:00:00.583) 0:00:00.997 ****** 2025-09-29 06:40:24.367791 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:24.367802 | orchestrator | 2025-09-29 06:40:24.367833 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-29 06:40:24.367845 | orchestrator | Monday 29 September 2025 06:40:11 +0000 (0:00:00.738) 0:00:01.736 ****** 2025-09-29 06:40:24.367856 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.367868 | orchestrator | 2025-09-29 06:40:24.367879 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-29 06:40:24.367890 | orchestrator | Monday 29 
September 2025 06:40:11 +0000 (0:00:00.231) 0:00:01.967 ****** 2025-09-29 06:40:24.367900 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.367911 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:40:24.367922 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:40:24.367932 | orchestrator | 2025-09-29 06:40:24.367943 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-29 06:40:24.367954 | orchestrator | Monday 29 September 2025 06:40:11 +0000 (0:00:00.291) 0:00:02.258 ****** 2025-09-29 06:40:24.367965 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:40:24.367975 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.367986 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:40:24.367997 | orchestrator | 2025-09-29 06:40:24.368007 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-29 06:40:24.368018 | orchestrator | Monday 29 September 2025 06:40:12 +0000 (0:00:01.003) 0:00:03.262 ****** 2025-09-29 06:40:24.368035 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.368079 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:40:24.368100 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:40:24.368120 | orchestrator | 2025-09-29 06:40:24.368139 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-29 06:40:24.368159 | orchestrator | Monday 29 September 2025 06:40:13 +0000 (0:00:00.268) 0:00:03.530 ****** 2025-09-29 06:40:24.368179 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.368199 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:40:24.368218 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:40:24.368234 | orchestrator | 2025-09-29 06:40:24.368247 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-29 06:40:24.368282 | orchestrator | Monday 29 September 2025 06:40:13 +0000 (0:00:00.495) 0:00:04.025 
****** 2025-09-29 06:40:24.368295 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.368307 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:40:24.368349 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:40:24.368369 | orchestrator | 2025-09-29 06:40:24.368388 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-09-29 06:40:24.368407 | orchestrator | Monday 29 September 2025 06:40:13 +0000 (0:00:00.308) 0:00:04.334 ****** 2025-09-29 06:40:24.368427 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.368446 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:40:24.368463 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:40:24.368474 | orchestrator | 2025-09-29 06:40:24.368485 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-09-29 06:40:24.368496 | orchestrator | Monday 29 September 2025 06:40:14 +0000 (0:00:00.296) 0:00:04.631 ****** 2025-09-29 06:40:24.368507 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.368517 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:40:24.368528 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:40:24.368544 | orchestrator | 2025-09-29 06:40:24.368561 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-29 06:40:24.368580 | orchestrator | Monday 29 September 2025 06:40:14 +0000 (0:00:00.280) 0:00:04.911 ****** 2025-09-29 06:40:24.368599 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.368619 | orchestrator | 2025-09-29 06:40:24.368634 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-29 06:40:24.368645 | orchestrator | Monday 29 September 2025 06:40:14 +0000 (0:00:00.250) 0:00:05.161 ****** 2025-09-29 06:40:24.368656 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.368667 | orchestrator | 2025-09-29 06:40:24.368678 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2025-09-29 06:40:24.368689 | orchestrator | Monday 29 September 2025 06:40:15 +0000 (0:00:00.447) 0:00:05.609 ****** 2025-09-29 06:40:24.368699 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.368710 | orchestrator | 2025-09-29 06:40:24.368721 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:24.368732 | orchestrator | Monday 29 September 2025 06:40:15 +0000 (0:00:00.676) 0:00:06.286 ****** 2025-09-29 06:40:24.368743 | orchestrator | 2025-09-29 06:40:24.368754 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:24.368764 | orchestrator | Monday 29 September 2025 06:40:15 +0000 (0:00:00.075) 0:00:06.361 ****** 2025-09-29 06:40:24.368775 | orchestrator | 2025-09-29 06:40:24.368786 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:24.368797 | orchestrator | Monday 29 September 2025 06:40:15 +0000 (0:00:00.069) 0:00:06.430 ****** 2025-09-29 06:40:24.368807 | orchestrator | 2025-09-29 06:40:24.368818 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-29 06:40:24.368829 | orchestrator | Monday 29 September 2025 06:40:15 +0000 (0:00:00.068) 0:00:06.498 ****** 2025-09-29 06:40:24.368840 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.368851 | orchestrator | 2025-09-29 06:40:24.368862 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-29 06:40:24.368873 | orchestrator | Monday 29 September 2025 06:40:16 +0000 (0:00:00.245) 0:00:06.744 ****** 2025-09-29 06:40:24.368883 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.368894 | orchestrator | 2025-09-29 06:40:24.368925 | orchestrator | TASK [Define mgr module test vars] ********************************************* 
2025-09-29 06:40:24.368937 | orchestrator | Monday 29 September 2025 06:40:16 +0000 (0:00:00.289) 0:00:07.034 ****** 2025-09-29 06:40:24.368948 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.368959 | orchestrator | 2025-09-29 06:40:24.368969 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-09-29 06:40:24.368980 | orchestrator | Monday 29 September 2025 06:40:16 +0000 (0:00:00.120) 0:00:07.154 ****** 2025-09-29 06:40:24.369001 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:40:24.369012 | orchestrator | 2025-09-29 06:40:24.369023 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-09-29 06:40:24.369034 | orchestrator | Monday 29 September 2025 06:40:18 +0000 (0:00:02.070) 0:00:09.225 ****** 2025-09-29 06:40:24.369045 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.369075 | orchestrator | 2025-09-29 06:40:24.369086 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-09-29 06:40:24.369096 | orchestrator | Monday 29 September 2025 06:40:18 +0000 (0:00:00.271) 0:00:09.496 ****** 2025-09-29 06:40:24.369107 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.369117 | orchestrator | 2025-09-29 06:40:24.369128 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-09-29 06:40:24.369139 | orchestrator | Monday 29 September 2025 06:40:19 +0000 (0:00:00.305) 0:00:09.802 ****** 2025-09-29 06:40:24.369149 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.369160 | orchestrator | 2025-09-29 06:40:24.369171 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-09-29 06:40:24.369182 | orchestrator | Monday 29 September 2025 06:40:19 +0000 (0:00:00.131) 0:00:09.933 ****** 2025-09-29 06:40:24.369192 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:40:24.369203 | orchestrator | 
2025-09-29 06:40:24.369214 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-29 06:40:24.369224 | orchestrator | Monday 29 September 2025 06:40:19 +0000 (0:00:00.337) 0:00:10.271 ****** 2025-09-29 06:40:24.369235 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:24.369246 | orchestrator | 2025-09-29 06:40:24.369263 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-29 06:40:24.369274 | orchestrator | Monday 29 September 2025 06:40:20 +0000 (0:00:00.277) 0:00:10.549 ****** 2025-09-29 06:40:24.369285 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:40:24.369295 | orchestrator | 2025-09-29 06:40:24.369306 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-29 06:40:24.369317 | orchestrator | Monday 29 September 2025 06:40:20 +0000 (0:00:00.254) 0:00:10.803 ****** 2025-09-29 06:40:24.369327 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:24.369338 | orchestrator | 2025-09-29 06:40:24.369349 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-29 06:40:24.369359 | orchestrator | Monday 29 September 2025 06:40:21 +0000 (0:00:01.194) 0:00:11.998 ****** 2025-09-29 06:40:24.369370 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:24.369381 | orchestrator | 2025-09-29 06:40:24.369392 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-29 06:40:24.369402 | orchestrator | Monday 29 September 2025 06:40:21 +0000 (0:00:00.265) 0:00:12.263 ****** 2025-09-29 06:40:24.369413 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:24.369423 | orchestrator | 2025-09-29 06:40:24.369434 | orchestrator | TASK [Flush handlers] 
********************************************************** 2025-09-29 06:40:24.369445 | orchestrator | Monday 29 September 2025 06:40:21 +0000 (0:00:00.252) 0:00:12.516 ****** 2025-09-29 06:40:24.369455 | orchestrator | 2025-09-29 06:40:24.369466 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:24.369477 | orchestrator | Monday 29 September 2025 06:40:22 +0000 (0:00:00.070) 0:00:12.586 ****** 2025-09-29 06:40:24.369487 | orchestrator | 2025-09-29 06:40:24.369498 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:24.369509 | orchestrator | Monday 29 September 2025 06:40:22 +0000 (0:00:00.069) 0:00:12.656 ****** 2025-09-29 06:40:24.369519 | orchestrator | 2025-09-29 06:40:24.369530 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-29 06:40:24.369541 | orchestrator | Monday 29 September 2025 06:40:22 +0000 (0:00:00.073) 0:00:12.729 ****** 2025-09-29 06:40:24.369551 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:24.369568 | orchestrator | 2025-09-29 06:40:24.369580 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-29 06:40:24.369590 | orchestrator | Monday 29 September 2025 06:40:23 +0000 (0:00:01.587) 0:00:14.317 ****** 2025-09-29 06:40:24.369601 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-29 06:40:24.369612 | orchestrator |  "msg": [ 2025-09-29 06:40:24.369623 | orchestrator |  "Validator run completed.", 2025-09-29 06:40:24.369634 | orchestrator |  "You can find the report file here:", 2025-09-29 06:40:24.369644 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-09-29T06:40:10+00:00-report.json", 2025-09-29 06:40:24.369657 | orchestrator |  "on the following host:", 2025-09-29 06:40:24.369668 | orchestrator |  "testbed-manager" 
2025-09-29 06:40:24.369679 | orchestrator |  ] 2025-09-29 06:40:24.369690 | orchestrator | } 2025-09-29 06:40:24.369701 | orchestrator | 2025-09-29 06:40:24.369711 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:40:24.369723 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-29 06:40:24.369735 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:40:24.369753 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:40:24.670425 | orchestrator | 2025-09-29 06:40:24.670511 | orchestrator | 2025-09-29 06:40:24.670521 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:40:24.670529 | orchestrator | Monday 29 September 2025 06:40:24 +0000 (0:00:00.558) 0:00:14.875 ****** 2025-09-29 06:40:24.670535 | orchestrator | =============================================================================== 2025-09-29 06:40:24.670541 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.07s 2025-09-29 06:40:24.670548 | orchestrator | Write report file ------------------------------------------------------- 1.59s 2025-09-29 06:40:24.670554 | orchestrator | Aggregate test results step one ----------------------------------------- 1.19s 2025-09-29 06:40:24.670560 | orchestrator | Get container info ------------------------------------------------------ 1.00s 2025-09-29 06:40:24.670566 | orchestrator | Create report output directory ------------------------------------------ 0.74s 2025-09-29 06:40:24.670573 | orchestrator | Aggregate test results step three --------------------------------------- 0.68s 2025-09-29 06:40:24.670579 | orchestrator | Get timestamp for report file ------------------------------------------- 0.58s 2025-09-29 
06:40:24.670586 | orchestrator | Print report file information ------------------------------------------- 0.56s 2025-09-29 06:40:24.670593 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s 2025-09-29 06:40:24.670599 | orchestrator | Aggregate test results step two ----------------------------------------- 0.45s 2025-09-29 06:40:24.670606 | orchestrator | Pass test if required mgr modules are enabled --------------------------- 0.34s 2025-09-29 06:40:24.670612 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-09-29 06:40:24.670618 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.31s 2025-09-29 06:40:24.670625 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2025-09-29 06:40:24.670631 | orchestrator | Prepare test data for container existence test -------------------------- 0.29s 2025-09-29 06:40:24.670638 | orchestrator | Fail due to missing containers ------------------------------------------ 0.29s 2025-09-29 06:40:24.670645 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.28s 2025-09-29 06:40:24.670651 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s 2025-09-29 06:40:24.670657 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s 2025-09-29 06:40:24.670686 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s 2025-09-29 06:40:24.970781 | orchestrator | + osism validate ceph-osds 2025-09-29 06:40:44.458812 | orchestrator | 2025-09-29 06:40:44.458935 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-09-29 06:40:44.458959 | orchestrator | 2025-09-29 06:40:44.458975 | orchestrator | TASK [Get timestamp for report file]
******************************************* 2025-09-29 06:40:44.458993 | orchestrator | Monday 29 September 2025 06:40:40 +0000 (0:00:00.385) 0:00:00.385 ****** 2025-09-29 06:40:44.459009 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:44.459025 | orchestrator | 2025-09-29 06:40:44.459041 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-29 06:40:44.459160 | orchestrator | Monday 29 September 2025 06:40:41 +0000 (0:00:00.592) 0:00:00.978 ****** 2025-09-29 06:40:44.459177 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:44.459192 | orchestrator | 2025-09-29 06:40:44.459208 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-29 06:40:44.459225 | orchestrator | Monday 29 September 2025 06:40:41 +0000 (0:00:00.307) 0:00:01.285 ****** 2025-09-29 06:40:44.459241 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:40:44.459259 | orchestrator | 2025-09-29 06:40:44.459300 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-29 06:40:44.459322 | orchestrator | Monday 29 September 2025 06:40:42 +0000 (0:00:00.761) 0:00:02.046 ****** 2025-09-29 06:40:44.459341 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:44.459361 | orchestrator | 2025-09-29 06:40:44.459381 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-09-29 06:40:44.459401 | orchestrator | Monday 29 September 2025 06:40:42 +0000 (0:00:00.108) 0:00:02.155 ****** 2025-09-29 06:40:44.459421 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:44.459440 | orchestrator | 2025-09-29 06:40:44.459460 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-09-29 06:40:44.459478 | orchestrator | Monday 29 September 2025 06:40:42 +0000 
(0:00:00.127) 0:00:02.282 ****** 2025-09-29 06:40:44.459498 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:44.459519 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:40:44.459539 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:40:44.459558 | orchestrator | 2025-09-29 06:40:44.459578 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-09-29 06:40:44.459598 | orchestrator | Monday 29 September 2025 06:40:43 +0000 (0:00:00.258) 0:00:02.541 ****** 2025-09-29 06:40:44.459617 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:44.459637 | orchestrator | 2025-09-29 06:40:44.459657 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-09-29 06:40:44.459677 | orchestrator | Monday 29 September 2025 06:40:43 +0000 (0:00:00.137) 0:00:02.678 ****** 2025-09-29 06:40:44.459693 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:44.459708 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:44.459723 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:44.459737 | orchestrator | 2025-09-29 06:40:44.459753 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-09-29 06:40:44.459769 | orchestrator | Monday 29 September 2025 06:40:43 +0000 (0:00:00.274) 0:00:02.953 ****** 2025-09-29 06:40:44.459786 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:44.459801 | orchestrator | 2025-09-29 06:40:44.459816 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-29 06:40:44.459834 | orchestrator | Monday 29 September 2025 06:40:43 +0000 (0:00:00.413) 0:00:03.366 ****** 2025-09-29 06:40:44.459848 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:44.459862 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:44.459875 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:44.459890 | orchestrator | 2025-09-29 06:40:44.459905 | orchestrator | 
TASK [Get list of ceph-osd containers on host] ********************************* 2025-09-29 06:40:44.459919 | orchestrator | Monday 29 September 2025 06:40:44 +0000 (0:00:00.367) 0:00:03.733 ****** 2025-09-29 06:40:44.459967 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e38f41dd359b0765339dfba7f2eb262a301ba78d7f148bb23fb01397c6322458', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-29 06:40:44.460000 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f1508d97baa9b3fba8e1921668f90b1b2e05519887722ffdb2fbbed0317d0e31', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-29 06:40:44.460019 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bb8ea478e1d84d55ad7ef142ffa133617c80ecaca88a8da6124daeb6c1143d8d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-09-29 06:40:44.460035 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6a2dc16a36edf24fa3fd2c66c8e096ef14cc74ddeb70c205cb4a434d4d70acca', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-29 06:40:44.460090 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e3ce4d5b5fdca2cdebbde9a455d93fd6f2e02ae6794389892ce81ebd559a14b6', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-29 06:40:44.460135 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c0cf24abb22ec5987d5b4532a70acc2463c7b92824fd0691cfd8cd1949893c90', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  
2025-09-29 06:40:44.460156 | orchestrator | skipping: [testbed-node-3] => (item={'id': '96a9a2418a550a68eed99d83a0c454b745753bc7ebe5e7ed2c4713d89f6caa12', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.460170 | orchestrator | skipping: [testbed-node-3] => (item={'id': '157426bd993bae16eb4c1924f30a8c702ee2b954742e96fe0ca7f7af89d284f1', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.460184 | orchestrator | skipping: [testbed-node-3] => (item={'id': '63c2b3e18c56eb4ae232b9b625d8ed076145dd377f5ab5f0be3b769f519bbaad', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.460203 | orchestrator | skipping: [testbed-node-3] => (item={'id': '24eb03661ae3344ae7343f25d2529326f9fd29aca9248997f33cc58f9150fb21', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-29 06:40:44.460218 | orchestrator | skipping: [testbed-node-3] => (item={'id': '783b024e341057f9a91f15240b3510dcbc56d0a9e7dcf6072110d1ca8ad7798f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-29 06:40:44.460233 | orchestrator | skipping: [testbed-node-3] => (item={'id': '07863001b9f17f3075380dad31840cc0258d459f5711176ee8009abd10af1dc5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-29 06:40:44.460248 | orchestrator | ok: [testbed-node-3] => (item={'id': '637cf5fbfe547561533309b9a82b301da042595b827f24e2b9b2e0dddff7e2a4', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-29 06:40:44.460263 | orchestrator | ok: [testbed-node-3] => (item={'id': 'f4aa9951095b81588e7c8edb808e39caece0c582bd420eec4fb9763d5b1a62ea', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-29 06:40:44.460291 | orchestrator | skipping: [testbed-node-3] => (item={'id': '40dcdfdcfacaa8663029339dea9cfbfc83aaa9925d5946c4cc3619d3e2306e1b', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-29 06:40:44.460306 | orchestrator | skipping: [testbed-node-3] => (item={'id': '737cd32a8a34ac46d118163d247fe223dd58ce7205fe3161fc0ee27fd863606c', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-29 06:40:44.460320 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e74c2bf8d327e1d17a28f012b481022e0bc45eac971e43acaf7de2aeee7b558c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-29 06:40:44.460335 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8596367397d57b4d304a1a3032270f014901e948f5a545416f39b6beb261349c', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-29 06:40:44.460349 | orchestrator | skipping: [testbed-node-3] => (item={'id': '95f744f077a2c92614d69742f528f070a1af246b9f2a2d3253fa567fa46e35f8', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-29 06:40:44.460371 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'1a262777bb711a9dede8f405a7c8d055e65540b18efbd62d592791ef5a56a0d1', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-29 06:40:44.460386 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a71aa32553b0622a2463c484107c3312f7c8f35c9455dfb122d5caa401c5a24', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-29 06:40:44.460411 | orchestrator | skipping: [testbed-node-4] => (item={'id': '222f482d2cea0a1a08a92389b7d88c0a62f217d48d4383199a6550972bbe7388', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-29 06:40:44.674984 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c67be36765d7eaedf5a66821a6016286b4fb891b33be3157e958a8b47898420b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-09-29 06:40:44.675069 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c7b1ddf8ce187e3c63150034a4732653fd123d5bbacde5fe732196ac89dbc389', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-29 06:40:44.675078 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6945a7608f3fd593b7763f0d0b9ebceee8a8df5d3999357f0ae86cc33f312f23', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-29 06:40:44.675083 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b3fb770568e230ec75b91e05492238e3f862f37816261043479b8e7299fc0fb6', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-29 
06:40:44.675088 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ccded51a3b91a3ddbd0350c9194f5d4b6a6d49898aab6ce2505a42b9e746152e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.675092 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef588eae9ed240a8f73a434fe8c4019a2519deabe231d7fe815c6ebfe363d017', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.675112 | orchestrator | skipping: [testbed-node-4] => (item={'id': '42879683321a6be12b5fab736d9ef4bd0f460a41fe0a095bc98ecb8cfd31f6a9', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.675119 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ca50d218bdc5c0e08bf4aa5d480d8839fe758f4eae648f9f790ac4dc3906c0c6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-29 06:40:44.675125 | orchestrator | skipping: [testbed-node-4] => (item={'id': '22d543cce159a39265396b256f2873d9b02fd8adec6babffafc3ccc9c02afa62', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-29 06:40:44.675132 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5bf11e729b8a2228d4bb4b402441d218233f34cd4087221f10109d0b187617ae', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-29 06:40:44.675138 | orchestrator | ok: [testbed-node-4] => (item={'id': 'a4b82ead50371e3759f30ef551cd715a849c4176874bca47b7ffd4797ba164f3', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-29 06:40:44.675145 | orchestrator | ok: [testbed-node-4] => (item={'id': '44938c7f31301d55299ea93c13b09196aa4631450c8d1ec44c3ce083255d4730', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-29 06:40:44.675151 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c805581877d53c0c770698dcf798e6cb5f70a2135c717124161bff514884826d', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-29 06:40:44.675158 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd36e07670a48e8871747e12a3f78f1c42d67f84f64420e7bcc6e7fc11a59918f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-29 06:40:44.675165 | orchestrator | skipping: [testbed-node-4] => (item={'id': '345b4cc4f6b76a647153b40c08cde0bff88b4e0a678cc6834ce84786f894b682', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-29 06:40:44.675200 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a5c3aa0ecf79a25d18aa473ab9a6ce7d5a0e8e43ffd9ec467e7bc6b81035199a', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-29 06:40:44.675209 | orchestrator | skipping: [testbed-node-4] => (item={'id': '53e02d05dccfaa56215b6e1a8b7470953e9d82ccc7ed4ebad4fd980e6b19017e', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-29 06:40:44.675216 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'8de244653e717aa6310ccabdfa0fe492409ad9f06af109a7bd341c5333db1cd4', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-29 06:40:44.675223 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2d3f871606f35ef42b17d689f29ae2045ef7691ac7976abb4aef4c5727964e38', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-29 06:40:44.675229 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f79e6025c0109f8669a0482f14aee0b255ea27cb46b0ac6b1600e1d72ad31c7', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-09-29 06:40:44.675240 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8ede05ca97058f1c7070d66b2a5e42eac0dc063bfef0786314747acea1249eea', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-09-29 06:40:44.675244 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cae9b84e59650eb3d375745c88e0dfdac7e1d75dbf4a6b40cab3a7b0cccd8182', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-29 06:40:44.675249 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8bcbd9223f863f073b74f6e31de96bae33af63c44d5a320711aa5d1498a40de2', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-09-29 06:40:44.675253 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1c86b73d69f241b09b67d6a69c6359f32689f73e0e6fba0bc1f524e2e8f8a725', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-09-29 
06:40:44.675257 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2d6fee72dd5ef044e1793e8208ba7b0892fda8e4455cb24944f68893978833b3', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.675261 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d47e045a0c320806de3eb98f142fb3c0b48cdb5408eba15bba2051207c8d52e', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.675265 | orchestrator | skipping: [testbed-node-5] => (item={'id': '223ebd1f28876fc782462a192ac0408b6ad6fa69849439e0716ae3d103025225', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-09-29 06:40:44.675269 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c03ffd98e813094ef8a46000dda45121d7e4b18f9c7b6970bdee8d1498b451f4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-09-29 06:40:44.675276 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7508fcab33e28fe1694c8f03c505ea09014337d7318b0f7e23365ce161d6f8f6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-09-29 06:40:44.675280 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b08ba4f409f9c1d1883272db21f293db32ac090ff26c55ae87cd1e251ef2aa27', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-09-29 06:40:44.675290 | orchestrator | ok: [testbed-node-5] => (item={'id': '4b3c4480fe996f868da5d85ed7a7c5d3a1f97d4f24a2d9c66f773932e62b0551', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-29 06:40:51.689567 | orchestrator | ok: [testbed-node-5] => (item={'id': '62eb7e7b36bd26bd23134251b98db114bf6be30b0cb12ef5738e7a7baf57e21d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-09-29 06:40:51.689671 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5c5b3a3b13a85962bcfcc4612059650dfe6303461795d70a92a4d5eb949e1d33', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-09-29 06:40:51.689688 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e03cc153a6c718c93f22bd96673162ad48d2a77b00c89c2844fc9ffd3a446291', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-09-29 06:40:51.689723 | orchestrator | skipping: [testbed-node-5] => (item={'id': '78078c5529f4a2dfa01dc38f2d84cff6374dd6aac7c2e5700917b440d6b6b8c2', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-09-29 06:40:51.689735 | orchestrator | skipping: [testbed-node-5] => (item={'id': '793607e123b62dc378f238d5e02ea897efe57d602d35ce4bf7dfb51714794fe8', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-09-29 06:40:51.689745 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ce6b98997c519a312cec7da653f86857c012daa6aed1e65280919ea0d5b7679b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-29 06:40:51.689755 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'c4d413a9bc549ea99e30b65b64eb79f528dbd8f9032c372409cf1cd58151a897', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-09-29 06:40:51.689765 | orchestrator | 2025-09-29 06:40:51.689777 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-09-29 06:40:51.689788 | orchestrator | Monday 29 September 2025 06:40:44 +0000 (0:00:00.447) 0:00:04.181 ****** 2025-09-29 06:40:51.689798 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.689809 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:51.689819 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:51.689828 | orchestrator | 2025-09-29 06:40:51.689838 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-09-29 06:40:51.689848 | orchestrator | Monday 29 September 2025 06:40:44 +0000 (0:00:00.257) 0:00:04.438 ****** 2025-09-29 06:40:51.689857 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.689867 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:40:51.689877 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:40:51.689886 | orchestrator | 2025-09-29 06:40:51.689896 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-09-29 06:40:51.689905 | orchestrator | Monday 29 September 2025 06:40:45 +0000 (0:00:00.247) 0:00:04.686 ****** 2025-09-29 06:40:51.689915 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.689925 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:51.689935 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:51.689944 | orchestrator | 2025-09-29 06:40:51.689954 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-29 06:40:51.689964 | orchestrator | Monday 29 September 2025 06:40:45 +0000 (0:00:00.364) 0:00:05.051 ****** 2025-09-29 06:40:51.689973 | orchestrator | ok: 
[testbed-node-3] 2025-09-29 06:40:51.689983 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:51.689992 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:51.690002 | orchestrator | 2025-09-29 06:40:51.690153 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-09-29 06:40:51.690170 | orchestrator | Monday 29 September 2025 06:40:45 +0000 (0:00:00.272) 0:00:05.324 ****** 2025-09-29 06:40:51.690183 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-09-29 06:40:51.690195 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-09-29 06:40:51.690206 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.690218 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-09-29 06:40:51.690229 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-09-29 06:40:51.690254 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:40:51.691083 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-09-29 06:40:51.691113 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-09-29 06:40:51.691146 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:40:51.691156 | orchestrator | 2025-09-29 06:40:51.691166 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-09-29 06:40:51.691176 | orchestrator | Monday 29 September 2025 06:40:46 +0000 (0:00:00.286) 0:00:05.611 ****** 2025-09-29 06:40:51.691186 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.691195 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:51.691205 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:51.691214 | 
orchestrator | 2025-09-29 06:40:51.691250 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-29 06:40:51.691265 | orchestrator | Monday 29 September 2025 06:40:46 +0000 (0:00:00.261) 0:00:05.873 ****** 2025-09-29 06:40:51.691281 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691298 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:40:51.691315 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:40:51.691331 | orchestrator | 2025-09-29 06:40:51.691348 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-09-29 06:40:51.691360 | orchestrator | Monday 29 September 2025 06:40:46 +0000 (0:00:00.356) 0:00:06.229 ****** 2025-09-29 06:40:51.691370 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691379 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:40:51.691388 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:40:51.691398 | orchestrator | 2025-09-29 06:40:51.691407 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-09-29 06:40:51.691417 | orchestrator | Monday 29 September 2025 06:40:46 +0000 (0:00:00.266) 0:00:06.496 ****** 2025-09-29 06:40:51.691427 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.691436 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:51.691445 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:51.691455 | orchestrator | 2025-09-29 06:40:51.691465 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-29 06:40:51.691474 | orchestrator | Monday 29 September 2025 06:40:47 +0000 (0:00:00.304) 0:00:06.800 ****** 2025-09-29 06:40:51.691484 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691493 | orchestrator | 2025-09-29 06:40:51.691503 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-29 
06:40:51.691512 | orchestrator | Monday 29 September 2025 06:40:47 +0000 (0:00:00.238) 0:00:07.039 ****** 2025-09-29 06:40:51.691522 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691531 | orchestrator | 2025-09-29 06:40:51.691541 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-29 06:40:51.691550 | orchestrator | Monday 29 September 2025 06:40:47 +0000 (0:00:00.209) 0:00:07.248 ****** 2025-09-29 06:40:51.691560 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691569 | orchestrator | 2025-09-29 06:40:51.691579 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:51.691588 | orchestrator | Monday 29 September 2025 06:40:47 +0000 (0:00:00.217) 0:00:07.466 ****** 2025-09-29 06:40:51.691597 | orchestrator | 2025-09-29 06:40:51.691607 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:51.691616 | orchestrator | Monday 29 September 2025 06:40:48 +0000 (0:00:00.061) 0:00:07.527 ****** 2025-09-29 06:40:51.691626 | orchestrator | 2025-09-29 06:40:51.691635 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:40:51.691645 | orchestrator | Monday 29 September 2025 06:40:48 +0000 (0:00:00.168) 0:00:07.696 ****** 2025-09-29 06:40:51.691654 | orchestrator | 2025-09-29 06:40:51.691663 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-29 06:40:51.691673 | orchestrator | Monday 29 September 2025 06:40:48 +0000 (0:00:00.062) 0:00:07.758 ****** 2025-09-29 06:40:51.691682 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691692 | orchestrator | 2025-09-29 06:40:51.691701 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-09-29 06:40:51.691711 | orchestrator | Monday 29 September 2025 06:40:48 
+0000 (0:00:00.219) 0:00:07.978 ****** 2025-09-29 06:40:51.691728 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691737 | orchestrator | 2025-09-29 06:40:51.691747 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-29 06:40:51.691756 | orchestrator | Monday 29 September 2025 06:40:48 +0000 (0:00:00.221) 0:00:08.199 ****** 2025-09-29 06:40:51.691766 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.691775 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:51.691785 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:51.691794 | orchestrator | 2025-09-29 06:40:51.691804 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-29 06:40:51.691813 | orchestrator | Monday 29 September 2025 06:40:48 +0000 (0:00:00.250) 0:00:08.450 ****** 2025-09-29 06:40:51.691822 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.691832 | orchestrator | 2025-09-29 06:40:51.691842 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-29 06:40:51.691851 | orchestrator | Monday 29 September 2025 06:40:49 +0000 (0:00:00.245) 0:00:08.696 ****** 2025-09-29 06:40:51.691861 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-29 06:40:51.691870 | orchestrator | 2025-09-29 06:40:51.691880 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-29 06:40:51.691889 | orchestrator | Monday 29 September 2025 06:40:50 +0000 (0:00:01.540) 0:00:10.236 ****** 2025-09-29 06:40:51.691899 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.691908 | orchestrator | 2025-09-29 06:40:51.691918 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-09-29 06:40:51.691927 | orchestrator | Monday 29 September 2025 06:40:50 +0000 (0:00:00.116) 0:00:10.353 ****** 2025-09-29 06:40:51.691937 | 
orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.691946 | orchestrator | 2025-09-29 06:40:51.691955 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-29 06:40:51.691965 | orchestrator | Monday 29 September 2025 06:40:51 +0000 (0:00:00.251) 0:00:10.604 ****** 2025-09-29 06:40:51.691975 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:40:51.691984 | orchestrator | 2025-09-29 06:40:51.691994 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-09-29 06:40:51.692003 | orchestrator | Monday 29 September 2025 06:40:51 +0000 (0:00:00.108) 0:00:10.712 ****** 2025-09-29 06:40:51.692013 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.692022 | orchestrator | 2025-09-29 06:40:51.692037 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-29 06:40:51.692078 | orchestrator | Monday 29 September 2025 06:40:51 +0000 (0:00:00.228) 0:00:10.940 ****** 2025-09-29 06:40:51.692094 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:40:51.692110 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:40:51.692127 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:40:51.692143 | orchestrator | 2025-09-29 06:40:51.692159 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-09-29 06:40:51.692183 | orchestrator | Monday 29 September 2025 06:40:51 +0000 (0:00:00.261) 0:00:11.202 ****** 2025-09-29 06:41:02.367330 | orchestrator | changed: [testbed-node-3] 2025-09-29 06:41:02.367467 | orchestrator | changed: [testbed-node-4] 2025-09-29 06:41:02.367491 | orchestrator | changed: [testbed-node-5] 2025-09-29 06:41:02.367509 | orchestrator | 2025-09-29 06:41:02.367528 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-09-29 06:41:02.367548 | orchestrator | Monday 29 September 2025 06:40:54 +0000 (0:00:02.528) 0:00:13.730 
****** 2025-09-29 06:41:02.367567 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:41:02.367588 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:41:02.367609 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:41:02.367628 | orchestrator | 2025-09-29 06:41:02.367649 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-29 06:41:02.367662 | orchestrator | Monday 29 September 2025 06:40:54 +0000 (0:00:00.283) 0:00:14.013 ****** 2025-09-29 06:41:02.367673 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:41:02.367716 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:41:02.367744 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:41:02.367766 | orchestrator | 2025-09-29 06:41:02.367784 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-29 06:41:02.367802 | orchestrator | Monday 29 September 2025 06:40:55 +0000 (0:00:00.541) 0:00:14.555 ****** 2025-09-29 06:41:02.367841 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:41:02.367860 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:41:02.367878 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:41:02.367897 | orchestrator | 2025-09-29 06:41:02.367918 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-29 06:41:02.367938 | orchestrator | Monday 29 September 2025 06:40:55 +0000 (0:00:00.279) 0:00:14.834 ****** 2025-09-29 06:41:02.367958 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:41:02.367976 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:41:02.367989 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:41:02.368002 | orchestrator | 2025-09-29 06:41:02.368022 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-09-29 06:41:02.368041 | orchestrator | Monday 29 September 2025 06:40:55 +0000 (0:00:00.286) 0:00:15.121 ****** 2025-09-29 06:41:02.368114 | orchestrator | 
skipping: [testbed-node-3] 2025-09-29 06:41:02.368128 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:41:02.368141 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:41:02.368153 | orchestrator | 2025-09-29 06:41:02.368164 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-29 06:41:02.368175 | orchestrator | Monday 29 September 2025 06:40:55 +0000 (0:00:00.241) 0:00:15.363 ****** 2025-09-29 06:41:02.368186 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:41:02.368197 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:41:02.368208 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:41:02.368218 | orchestrator | 2025-09-29 06:41:02.368229 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-29 06:41:02.368240 | orchestrator | Monday 29 September 2025 06:40:56 +0000 (0:00:00.248) 0:00:15.611 ****** 2025-09-29 06:41:02.368251 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:41:02.368262 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:41:02.368273 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:41:02.368284 | orchestrator | 2025-09-29 06:41:02.368295 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-29 06:41:02.368306 | orchestrator | Monday 29 September 2025 06:40:56 +0000 (0:00:00.615) 0:00:16.227 ****** 2025-09-29 06:41:02.368317 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:41:02.368327 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:41:02.368338 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:41:02.368349 | orchestrator | 2025-09-29 06:41:02.368359 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-29 06:41:02.368370 | orchestrator | Monday 29 September 2025 06:40:57 +0000 (0:00:00.417) 0:00:16.644 ****** 2025-09-29 06:41:02.368381 | orchestrator | ok: [testbed-node-3] 2025-09-29 
06:41:02.368392 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:41:02.368403 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:41:02.368414 | orchestrator | 2025-09-29 06:41:02.368425 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-09-29 06:41:02.368436 | orchestrator | Monday 29 September 2025 06:40:57 +0000 (0:00:00.255) 0:00:16.899 ****** 2025-09-29 06:41:02.368447 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:41:02.368458 | orchestrator | skipping: [testbed-node-4] 2025-09-29 06:41:02.368469 | orchestrator | skipping: [testbed-node-5] 2025-09-29 06:41:02.368479 | orchestrator | 2025-09-29 06:41:02.368490 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-29 06:41:02.368501 | orchestrator | Monday 29 September 2025 06:40:57 +0000 (0:00:00.254) 0:00:17.153 ****** 2025-09-29 06:41:02.368512 | orchestrator | ok: [testbed-node-3] 2025-09-29 06:41:02.368522 | orchestrator | ok: [testbed-node-4] 2025-09-29 06:41:02.368533 | orchestrator | ok: [testbed-node-5] 2025-09-29 06:41:02.368558 | orchestrator | 2025-09-29 06:41:02.368569 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-29 06:41:02.368580 | orchestrator | Monday 29 September 2025 06:40:58 +0000 (0:00:00.390) 0:00:17.544 ****** 2025-09-29 06:41:02.368590 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:41:02.368601 | orchestrator | 2025-09-29 06:41:02.368612 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-29 06:41:02.368623 | orchestrator | Monday 29 September 2025 06:40:58 +0000 (0:00:00.222) 0:00:17.766 ****** 2025-09-29 06:41:02.368634 | orchestrator | skipping: [testbed-node-3] 2025-09-29 06:41:02.368645 | orchestrator | 2025-09-29 06:41:02.368662 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2025-09-29 06:41:02.368674 | orchestrator | Monday 29 September 2025 06:40:58 +0000 (0:00:00.216) 0:00:17.983 ****** 2025-09-29 06:41:02.368685 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:41:02.368695 | orchestrator | 2025-09-29 06:41:02.368706 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-29 06:41:02.368717 | orchestrator | Monday 29 September 2025 06:40:59 +0000 (0:00:01.226) 0:00:19.209 ****** 2025-09-29 06:41:02.368728 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:41:02.368739 | orchestrator | 2025-09-29 06:41:02.368750 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-29 06:41:02.368761 | orchestrator | Monday 29 September 2025 06:40:59 +0000 (0:00:00.232) 0:00:19.441 ****** 2025-09-29 06:41:02.368794 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:41:02.368806 | orchestrator | 2025-09-29 06:41:02.368817 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:41:02.368828 | orchestrator | Monday 29 September 2025 06:41:00 +0000 (0:00:00.215) 0:00:19.657 ****** 2025-09-29 06:41:02.368838 | orchestrator | 2025-09-29 06:41:02.368849 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:41:02.368860 | orchestrator | Monday 29 September 2025 06:41:00 +0000 (0:00:00.071) 0:00:19.728 ****** 2025-09-29 06:41:02.368870 | orchestrator | 2025-09-29 06:41:02.368881 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-29 06:41:02.368892 | orchestrator | Monday 29 September 2025 06:41:00 +0000 (0:00:00.071) 0:00:19.800 ****** 2025-09-29 06:41:02.368903 | orchestrator | 2025-09-29 06:41:02.368913 | orchestrator | RUNNING HANDLER [Write 
report file] ******************************************** 2025-09-29 06:41:02.368938 | orchestrator | Monday 29 September 2025 06:41:00 +0000 (0:00:00.066) 0:00:19.866 ****** 2025-09-29 06:41:02.368950 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-29 06:41:02.368971 | orchestrator | 2025-09-29 06:41:02.368982 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-29 06:41:02.368993 | orchestrator | Monday 29 September 2025 06:41:01 +0000 (0:00:01.234) 0:00:21.101 ****** 2025-09-29 06:41:02.369004 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-29 06:41:02.369015 | orchestrator |  "msg": [ 2025-09-29 06:41:02.369026 | orchestrator |  "Validator run completed.", 2025-09-29 06:41:02.369038 | orchestrator |  "You can find the report file here:", 2025-09-29 06:41:02.369070 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-09-29T06:40:41+00:00-report.json", 2025-09-29 06:41:02.369083 | orchestrator |  "on the following host:", 2025-09-29 06:41:02.369094 | orchestrator |  "testbed-manager" 2025-09-29 06:41:02.369105 | orchestrator |  ] 2025-09-29 06:41:02.369117 | orchestrator | } 2025-09-29 06:41:02.369128 | orchestrator | 2025-09-29 06:41:02.369139 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:41:02.369151 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-29 06:41:02.369163 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-29 06:41:02.369182 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-29 06:41:02.369193 | orchestrator | 2025-09-29 06:41:02.369204 | orchestrator | 2025-09-29 06:41:02.369215 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-29 06:41:02.369226 | orchestrator | Monday 29 September 2025 06:41:02 +0000 (0:00:00.601) 0:00:21.703 ****** 2025-09-29 06:41:02.369236 | orchestrator | =============================================================================== 2025-09-29 06:41:02.369247 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.53s 2025-09-29 06:41:02.369258 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.54s 2025-09-29 06:41:02.369269 | orchestrator | Write report file ------------------------------------------------------- 1.23s 2025-09-29 06:41:02.369280 | orchestrator | Aggregate test results step one ----------------------------------------- 1.23s 2025-09-29 06:41:02.369290 | orchestrator | Create report output directory ------------------------------------------ 0.76s 2025-09-29 06:41:02.369301 | orchestrator | Prepare test data ------------------------------------------------------- 0.62s 2025-09-29 06:41:02.369312 | orchestrator | Print report file information ------------------------------------------- 0.60s 2025-09-29 06:41:02.369323 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s 2025-09-29 06:41:02.369333 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.54s 2025-09-29 06:41:02.369344 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.45s 2025-09-29 06:41:02.369355 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.42s 2025-09-29 06:41:02.369365 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.41s 2025-09-29 06:41:02.369376 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.39s 2025-09-29 06:41:02.369387 | orchestrator | Prepare test data 
------------------------------------------------------- 0.37s 2025-09-29 06:41:02.369398 | orchestrator | Set test result to passed if count matches ------------------------------ 0.36s 2025-09-29 06:41:02.369408 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.36s 2025-09-29 06:41:02.369425 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.31s 2025-09-29 06:41:02.369436 | orchestrator | Set test result to passed if all containers are running ----------------- 0.30s 2025-09-29 06:41:02.369447 | orchestrator | Flush handlers ---------------------------------------------------------- 0.29s 2025-09-29 06:41:02.369458 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.29s 2025-09-29 06:41:02.560834 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-09-29 06:41:02.570361 | orchestrator | + set -e 2025-09-29 06:41:02.570442 | orchestrator | + source /opt/manager-vars.sh 2025-09-29 06:41:02.570459 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-29 06:41:02.570472 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-29 06:41:02.570483 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-29 06:41:02.570494 | orchestrator | ++ CEPH_VERSION=reef 2025-09-29 06:41:02.570506 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-29 06:41:02.570518 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-29 06:41:02.570529 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-29 06:41:02.570540 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-29 06:41:02.570552 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-29 06:41:02.570563 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-29 06:41:02.570573 | orchestrator | ++ export ARA=false 2025-09-29 06:41:02.570585 | orchestrator | ++ ARA=false 2025-09-29 06:41:02.570596 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-29 06:41:02.570607 | 
orchestrator | ++ DEPLOY_MODE=manager 2025-09-29 06:41:02.570617 | orchestrator | ++ export TEMPEST=false 2025-09-29 06:41:02.570628 | orchestrator | ++ TEMPEST=false 2025-09-29 06:41:02.570639 | orchestrator | ++ export IS_ZUUL=true 2025-09-29 06:41:02.570650 | orchestrator | ++ IS_ZUUL=true 2025-09-29 06:41:02.570661 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 06:41:02.570699 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.20 2025-09-29 06:41:02.570711 | orchestrator | ++ export EXTERNAL_API=false 2025-09-29 06:41:02.570722 | orchestrator | ++ EXTERNAL_API=false 2025-09-29 06:41:02.570733 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-29 06:41:02.570744 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-29 06:41:02.570754 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-29 06:41:02.570765 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-29 06:41:02.570776 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-29 06:41:02.570787 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-29 06:41:02.570798 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-29 06:41:02.570808 | orchestrator | + source /etc/os-release 2025-09-29 06:41:02.570819 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-29 06:41:02.570830 | orchestrator | ++ NAME=Ubuntu 2025-09-29 06:41:02.570841 | orchestrator | ++ VERSION_ID=24.04 2025-09-29 06:41:02.570852 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-29 06:41:02.570862 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-29 06:41:02.570873 | orchestrator | ++ ID=ubuntu 2025-09-29 06:41:02.570884 | orchestrator | ++ ID_LIKE=debian 2025-09-29 06:41:02.570895 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-29 06:41:02.570906 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-29 06:41:02.570917 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-29 06:41:02.570928 | orchestrator | ++ 
PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-29 06:41:02.570939 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-29 06:41:02.570950 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-29 06:41:02.570961 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-09-29 06:41:02.570973 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-29 06:41:02.570986 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-29 06:41:02.603591 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-29 06:41:25.169132 | orchestrator | 2025-09-29 06:41:25.169265 | orchestrator | # Status of Elasticsearch 2025-09-29 06:41:25.169293 | orchestrator | 2025-09-29 06:41:25.169313 | orchestrator | + pushd /opt/configuration/contrib 2025-09-29 06:41:25.169335 | orchestrator | + echo 2025-09-29 06:41:25.169354 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-29 06:41:25.169372 | orchestrator | + echo 2025-09-29 06:41:25.169390 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-29 06:41:25.338791 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-29 06:41:25.338889 | orchestrator | 2025-09-29 06:41:25.338913 | orchestrator | # Status of MariaDB 2025-09-29 06:41:25.338933 | orchestrator | 2025-09-29 06:41:25.338953 | orchestrator | + echo 2025-09-29 06:41:25.338973 | orchestrator | + echo '# Status of MariaDB' 2025-09-29 06:41:25.338994 | orchestrator | + echo 2025-09-29 06:41:25.339014 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-29 06:41:25.339035 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-29 06:41:25.386645 | orchestrator | Reading package lists... 2025-09-29 06:41:25.629176 | orchestrator | Building dependency tree... 2025-09-29 06:41:25.629407 | orchestrator | Reading state information... 2025-09-29 06:41:25.902671 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-29 06:41:25.902772 | orchestrator | bc set to manually installed. 2025-09-29 06:41:25.902788 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-09-29 06:41:26.552495 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-29 06:41:26.553475 | orchestrator | 2025-09-29 06:41:26.553515 | orchestrator | # Status of Prometheus 2025-09-29 06:41:26.553529 | orchestrator | 2025-09-29 06:41:26.553541 | orchestrator | + echo 2025-09-29 06:41:26.553552 | orchestrator | + echo '# Status of Prometheus' 2025-09-29 06:41:26.553563 | orchestrator | + echo 2025-09-29 06:41:26.553575 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-29 06:41:26.607101 | orchestrator |
503 Service Unavailable
2025-09-29 06:41:26.607207 | orchestrator | No server is available to handle this request. 2025-09-29 06:41:26.607266 | orchestrator | 2025-09-29 06:41:26.609273 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-29 06:41:26.673838 | orchestrator |
503 Service Unavailable
2025-09-29 06:41:26.673938 | orchestrator | No server is available to handle this request. 2025-09-29 06:41:26.673952 | orchestrator | 2025-09-29 06:41:26.675761 | orchestrator | 2025-09-29 06:41:26.675787 | orchestrator | # Status of RabbitMQ 2025-09-29 06:41:26.675793 | orchestrator | 2025-09-29 06:41:26.675799 | orchestrator | + echo 2025-09-29 06:41:26.675804 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-29 06:41:26.675809 | orchestrator | + echo 2025-09-29 06:41:26.675815 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-29 06:41:27.093723 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-29 06:41:27.100632 | orchestrator | 2025-09-29 06:41:27.100717 | orchestrator | # Status of Redis 2025-09-29 06:41:27.100732 | orchestrator | 2025-09-29 06:41:27.100745 | orchestrator | + echo 2025-09-29 06:41:27.100758 | orchestrator | + echo '# Status of Redis' 2025-09-29 06:41:27.100770 | orchestrator | + echo 2025-09-29 06:41:27.100783 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-29 06:41:27.106243 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.003165s;;;0.000000;10.000000 2025-09-29 06:41:27.106304 | orchestrator | + popd 2025-09-29 06:41:27.106314 | orchestrator | + echo 2025-09-29 06:41:27.106329 | orchestrator | 2025-09-29 06:41:27.106338 | orchestrator | # Create backup of MariaDB database 2025-09-29 06:41:27.106346 | orchestrator | 2025-09-29 06:41:27.106353 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-29 06:41:27.106360 | orchestrator | + echo 2025-09-29 06:41:27.106367 | orchestrator | + osism apply mariadb_backup -e 
mariadb_backup_type=full 2025-09-29 06:41:28.961888 | orchestrator | 2025-09-29 06:41:28 | INFO  | Task b01df658-b310-48f6-aa85-bf50b1f6c296 (mariadb_backup) was prepared for execution. 2025-09-29 06:41:28.962107 | orchestrator | 2025-09-29 06:41:28 | INFO  | It takes a moment until task b01df658-b310-48f6-aa85-bf50b1f6c296 (mariadb_backup) has been started and output is visible here. 2025-09-29 06:41:54.772369 | orchestrator | 2025-09-29 06:41:54.772471 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-29 06:41:54.772481 | orchestrator | 2025-09-29 06:41:54.772485 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-29 06:41:54.772490 | orchestrator | Monday 29 September 2025 06:41:32 +0000 (0:00:00.177) 0:00:00.177 ****** 2025-09-29 06:41:54.772494 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:41:54.772499 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:41:54.772504 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:41:54.772508 | orchestrator | 2025-09-29 06:41:54.772512 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-29 06:41:54.772516 | orchestrator | Monday 29 September 2025 06:41:33 +0000 (0:00:00.278) 0:00:00.456 ****** 2025-09-29 06:41:54.772520 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-29 06:41:54.772524 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-29 06:41:54.772528 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-29 06:41:54.772542 | orchestrator | 2025-09-29 06:41:54.772548 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-29 06:41:54.772554 | orchestrator | 2025-09-29 06:41:54.772560 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-29 06:41:54.772569 | orchestrator | Monday 
29 September 2025 06:41:33 +0000 (0:00:00.472) 0:00:00.928 ****** 2025-09-29 06:41:54.772578 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-29 06:41:54.772585 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-29 06:41:54.772591 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-29 06:41:54.772597 | orchestrator | 2025-09-29 06:41:54.772604 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-29 06:41:54.772630 | orchestrator | Monday 29 September 2025 06:41:33 +0000 (0:00:00.344) 0:00:01.273 ****** 2025-09-29 06:41:54.772637 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-29 06:41:54.772642 | orchestrator | 2025-09-29 06:41:54.772646 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-29 06:41:54.772650 | orchestrator | Monday 29 September 2025 06:41:34 +0000 (0:00:00.460) 0:00:01.733 ****** 2025-09-29 06:41:54.772653 | orchestrator | ok: [testbed-node-1] 2025-09-29 06:41:54.772657 | orchestrator | ok: [testbed-node-0] 2025-09-29 06:41:54.772661 | orchestrator | ok: [testbed-node-2] 2025-09-29 06:41:54.772665 | orchestrator | 2025-09-29 06:41:54.772669 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-09-29 06:41:54.772672 | orchestrator | Monday 29 September 2025 06:41:37 +0000 (0:00:02.696) 0:00:04.430 ****** 2025-09-29 06:41:54.772676 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-29 06:41:54.772680 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-29 06:41:54.772684 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-29 06:41:54.772688 | orchestrator | mariadb_bootstrap_restart 2025-09-29 06:41:54.772692 | orchestrator | skipping: 
[testbed-node-1] 2025-09-29 06:41:54.772696 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:41:54.772700 | orchestrator | changed: [testbed-node-0] 2025-09-29 06:41:54.772703 | orchestrator | 2025-09-29 06:41:54.772707 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-29 06:41:54.772711 | orchestrator | skipping: no hosts matched 2025-09-29 06:41:54.772714 | orchestrator | 2025-09-29 06:41:54.772718 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-29 06:41:54.772722 | orchestrator | skipping: no hosts matched 2025-09-29 06:41:54.772725 | orchestrator | 2025-09-29 06:41:54.772729 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-29 06:41:54.772733 | orchestrator | skipping: no hosts matched 2025-09-29 06:41:54.772737 | orchestrator | 2025-09-29 06:41:54.772741 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-29 06:41:54.772745 | orchestrator | 2025-09-29 06:41:54.772748 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-29 06:41:54.772752 | orchestrator | Monday 29 September 2025 06:41:54 +0000 (0:00:17.038) 0:00:21.469 ****** 2025-09-29 06:41:54.772756 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:41:54.772759 | orchestrator | skipping: [testbed-node-1] 2025-09-29 06:41:54.772763 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:41:54.772767 | orchestrator | 2025-09-29 06:41:54.772770 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-29 06:41:54.772774 | orchestrator | Monday 29 September 2025 06:41:54 +0000 (0:00:00.270) 0:00:21.740 ****** 2025-09-29 06:41:54.772778 | orchestrator | skipping: [testbed-node-0] 2025-09-29 06:41:54.772782 | orchestrator | skipping: [testbed-node-1] 2025-09-29 
06:41:54.772785 | orchestrator | skipping: [testbed-node-2] 2025-09-29 06:41:54.772789 | orchestrator | 2025-09-29 06:41:54.772793 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-29 06:41:54.772829 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-29 06:41:54.772835 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-29 06:41:54.772842 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-29 06:41:54.772846 | orchestrator | 2025-09-29 06:41:54.772850 | orchestrator | 2025-09-29 06:41:54.772854 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-29 06:41:54.772862 | orchestrator | Monday 29 September 2025 06:41:54 +0000 (0:00:00.191) 0:00:21.931 ****** 2025-09-29 06:41:54.772865 | orchestrator | =============================================================================== 2025-09-29 06:41:54.772869 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.04s 2025-09-29 06:41:54.772885 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.70s 2025-09-29 06:41:54.772889 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.47s 2025-09-29 06:41:54.772893 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.46s 2025-09-29 06:41:54.772896 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.34s 2025-09-29 06:41:54.772900 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-09-29 06:41:54.772904 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.27s 2025-09-29 06:41:54.772908 | orchestrator | Include 
mariadb post-upgrade.yml ---------------------------------------- 0.19s 2025-09-29 06:41:54.959488 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-29 06:41:54.968220 | orchestrator | + set -e 2025-09-29 06:41:54.968312 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-29 06:41:54.968352 | orchestrator | ++ export INTERACTIVE=false 2025-09-29 06:41:54.969449 | orchestrator | ++ INTERACTIVE=false 2025-09-29 06:41:54.969532 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-29 06:41:54.969548 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-29 06:41:54.969562 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-29 06:41:54.969943 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-29 06:41:54.976540 | orchestrator | 2025-09-29 06:41:54.976594 | orchestrator | # OpenStack endpoints 2025-09-29 06:41:54.976607 | orchestrator | 2025-09-29 06:41:54.976620 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-29 06:41:54.976632 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-29 06:41:54.976643 | orchestrator | + export OS_CLOUD=admin 2025-09-29 06:41:54.976654 | orchestrator | + OS_CLOUD=admin 2025-09-29 06:41:54.976665 | orchestrator | + echo 2025-09-29 06:41:54.976678 | orchestrator | + echo '# OpenStack endpoints' 2025-09-29 06:41:54.976689 | orchestrator | + echo 2025-09-29 06:41:54.976701 | orchestrator | + openstack endpoint list 2025-09-29 06:41:58.300880 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-29 06:41:58.301000 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-29 06:41:58.301017 | orchestrator | 
+----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-29 06:41:58.301030 | orchestrator | | 1362b3d0c3954ec09e779097b36ec31e | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-29 06:41:58.301067 | orchestrator | | 152bf6198d89470fa73b339fb8fc7bf5 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-29 06:41:58.301081 | orchestrator | | 238b9fbbf42841d58603bc543ea86f35 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-29 06:41:58.301139 | orchestrator | | 4ad81a8887cf40179d00784117b7b13b | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-29 06:41:58.301157 | orchestrator | | 5bfd93da6c934621a434b389f2570902 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-29 06:41:58.301169 | orchestrator | | 6b0f16617b9f46e6b285d381e314fec8 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-29 06:41:58.301207 | orchestrator | | 7499696e563a4a6387b66d5d338fb28b | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-29 06:41:58.301219 | orchestrator | | 7a04198683584e1ebe03740b9f74d5d3 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-29 06:41:58.301230 | orchestrator | | 94e77c4ceef04f7e8264fb9959231061 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-29 06:41:58.301241 | orchestrator | | 9ea826b54af0455aa16877fcf19475d2 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-29 06:41:58.301252 
| orchestrator | | b00542c225ab48f6bf41cbcd2a229958 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-29 06:41:58.301280 | orchestrator | | c7be6241bf044381850a6ad5a7ae5c3f | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-29 06:41:58.301292 | orchestrator | | cebde4d70633465b925484b5857db84e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-29 06:41:58.301304 | orchestrator | | d322eeed006f452fa8d3e38de5a2d7e4 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-29 06:41:58.301315 | orchestrator | | de68c1053fa1430484d11281560f647d | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-29 06:41:58.301327 | orchestrator | | e22e397f1ec24b5dbb4038ce3af51f86 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-29 06:41:58.301338 | orchestrator | | e2979e328ac844bcb653d669212ac568 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-29 06:41:58.301350 | orchestrator | | e3fa8e72461c4831a7a365571aa27286 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-09-29 06:41:58.301361 | orchestrator | | e72553a5e8f34c60baf7ab0e850fce8e | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-29 06:41:58.301373 | orchestrator | | e734682e92fe4fdc88b386b4caf6af04 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-29 06:41:58.301403 | orchestrator | | e8a56d72bd104dc0bf7ddc8dc871907b | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-29 06:41:58.301415 | orchestrator | | fe772fc00a564beeade7b12b6c42b59f | RegionOne | placement | placement | True | internal 
| https://api-int.testbed.osism.xyz:8780 | 2025-09-29 06:41:58.301427 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-29 06:41:58.449551 | orchestrator | 2025-09-29 06:41:58.449672 | orchestrator | # Cinder 2025-09-29 06:41:58.449699 | orchestrator | 2025-09-29 06:41:58.449719 | orchestrator | + echo 2025-09-29 06:41:58.449739 | orchestrator | + echo '# Cinder' 2025-09-29 06:41:58.449760 | orchestrator | + echo 2025-09-29 06:41:58.449780 | orchestrator | + openstack volume service list 2025-09-29 06:42:00.825720 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-29 06:42:00.825826 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-29 06:42:00.825851 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-29 06:42:00.825910 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-29T06:41:57.000000 | 2025-09-29 06:42:00.825933 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-29T06:41:58.000000 | 2025-09-29 06:42:00.825955 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-29T06:41:57.000000 | 2025-09-29 06:42:00.825967 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-29T06:41:57.000000 | 2025-09-29 06:42:00.825978 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-29T06:41:57.000000 | 2025-09-29 06:42:00.825989 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-29T06:41:57.000000 | 2025-09-29 06:42:00.825999 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 
2025-09-29T06:41:55.000000 | 2025-09-29 06:42:00.826014 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-29T06:41:56.000000 | 2025-09-29 06:42:00.826154 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-29T06:41:56.000000 | 2025-09-29 06:42:00.826166 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-29 06:42:00.979529 | orchestrator | 2025-09-29 06:42:00.979627 | orchestrator | # Neutron 2025-09-29 06:42:00.979643 | orchestrator | 2025-09-29 06:42:00.979655 | orchestrator | + echo 2025-09-29 06:42:00.979666 | orchestrator | + echo '# Neutron' 2025-09-29 06:42:00.979678 | orchestrator | + echo 2025-09-29 06:42:00.979689 | orchestrator | + openstack network agent list 2025-09-29 06:42:03.410929 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-29 06:42:03.411097 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-29 06:42:03.411116 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-29 06:42:03.411148 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-29 06:42:03.411876 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-29 06:42:03.411910 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-29 06:42:03.411923 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-29 06:42:03.411934 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | 
:-) | UP | ovn-controller | 2025-09-29 06:42:03.411945 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-29 06:42:03.411956 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-29 06:42:03.411967 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-29 06:42:03.411978 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-29 06:42:03.411988 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-29 06:42:03.582287 | orchestrator | + openstack network service provider list 2025-09-29 06:42:06.084360 | orchestrator | +---------------+------+---------+ 2025-09-29 06:42:06.084431 | orchestrator | | Service Type | Name | Default | 2025-09-29 06:42:06.084437 | orchestrator | +---------------+------+---------+ 2025-09-29 06:42:06.084442 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-29 06:42:06.084446 | orchestrator | +---------------+------+---------+ 2025-09-29 06:42:06.356955 | orchestrator | 2025-09-29 06:42:06.357101 | orchestrator | # Nova 2025-09-29 06:42:06.357128 | orchestrator | 2025-09-29 06:42:06.357140 | orchestrator | + echo 2025-09-29 06:42:06.357152 | orchestrator | + echo '# Nova' 2025-09-29 06:42:06.357164 | orchestrator | + echo 2025-09-29 06:42:06.357175 | orchestrator | + openstack compute service list 2025-09-29 06:42:09.494401 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-29 06:42:09.494529 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At 
| 2025-09-29 06:42:09.494548 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-29 06:42:09.494561 | orchestrator | | 1c44f548-0019-4c1f-ad2e-856b05f53540 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-29T06:42:08.000000 | 2025-09-29 06:42:09.494572 | orchestrator | | d8bea57b-d089-466d-8e6d-0b5071c697a3 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-29T06:42:02.000000 | 2025-09-29 06:42:09.494584 | orchestrator | | 988ac612-8a40-4676-9cf2-f99f2ca3754d | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-29T06:42:03.000000 | 2025-09-29 06:42:09.494595 | orchestrator | | 97edf4ab-36f8-4a5c-b1fd-8274f884676a | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-29T06:42:08.000000 | 2025-09-29 06:42:09.494606 | orchestrator | | ca18fdf7-c956-477e-bae3-902ea529fbe2 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-29T06:41:59.000000 | 2025-09-29 06:42:09.494617 | orchestrator | | d4c8396f-6675-4532-8452-f02f2b4791c8 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-09-29T06:42:00.000000 | 2025-09-29 06:42:09.494628 | orchestrator | | c66835c4-f08b-4296-a173-fe59cdb8fa94 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-29T06:42:01.000000 | 2025-09-29 06:42:09.494639 | orchestrator | | 51295438-a12f-4bcc-a2cc-a95160fa3383 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-29T06:42:02.000000 | 2025-09-29 06:42:09.494649 | orchestrator | | 3c708680-bdc5-4d3a-a7f0-95afe9c54b72 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-29T06:42:02.000000 | 2025-09-29 06:42:09.494660 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-29 06:42:09.746441 | orchestrator | + openstack hypervisor list 2025-09-29 
06:42:12.423370 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-29 06:42:12.423474 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-29 06:42:12.423488 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-29 06:42:12.423500 | orchestrator | | 9ba00efb-eb5b-4092-ad5a-a443d6a66942 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-29 06:42:12.423511 | orchestrator | | 4fd45054-1a14-42fd-8a32-c56e3a81316e | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-29 06:42:12.423522 | orchestrator | | 6e47d8f5-822d-4d8c-aa35-f0de8f80b5a2 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-29 06:42:12.423533 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-29 06:42:12.689906 | orchestrator | 2025-09-29 06:42:12.690004 | orchestrator | # Run OpenStack test play 2025-09-29 06:42:12.690123 | orchestrator | 2025-09-29 06:42:12.690138 | orchestrator | + echo 2025-09-29 06:42:12.690151 | orchestrator | + echo '# Run OpenStack test play' 2025-09-29 06:42:12.690164 | orchestrator | + echo 2025-09-29 06:42:12.690175 | orchestrator | + osism apply --environment openstack test 2025-09-29 06:42:14.533168 | orchestrator | 2025-09-29 06:42:14 | INFO  | Trying to run play test in environment openstack 2025-09-29 06:42:14.595523 | orchestrator | 2025-09-29 06:42:14 | INFO  | Task bb09e4d8-be0e-4c5f-a59c-541e4fb12d3d (test) was prepared for execution. 2025-09-29 06:42:14.595620 | orchestrator | 2025-09-29 06:42:14 | INFO  | It takes a moment until task bb09e4d8-be0e-4c5f-a59c-541e4fb12d3d (test) has been started and output is visible here. 
2025-09-29 06:49:05.448598 | orchestrator | 2025-09-29 06:49:05.448705 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-29 06:49:05.448719 | orchestrator | 2025-09-29 06:49:05.448727 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-29 06:49:05.448735 | orchestrator | Monday 29 September 2025 06:42:18 +0000 (0:00:00.078) 0:00:00.078 ****** 2025-09-29 06:49:05.448744 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.448753 | orchestrator | 2025-09-29 06:49:05.448760 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-29 06:49:05.448767 | orchestrator | Monday 29 September 2025 06:42:21 +0000 (0:00:03.303) 0:00:03.381 ****** 2025-09-29 06:49:05.448774 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.448781 | orchestrator | 2025-09-29 06:49:05.448788 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-29 06:49:05.448795 | orchestrator | Monday 29 September 2025 06:42:25 +0000 (0:00:03.910) 0:00:07.292 ****** 2025-09-29 06:49:05.448802 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.448809 | orchestrator | 2025-09-29 06:49:05.448816 | orchestrator | TASK [Create test project] ***************************************************** 2025-09-29 06:49:05.448823 | orchestrator | Monday 29 September 2025 06:42:31 +0000 (0:00:06.280) 0:00:13.573 ****** 2025-09-29 06:49:05.448829 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.448836 | orchestrator | 2025-09-29 06:49:05.448844 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-29 06:49:05.448850 | orchestrator | Monday 29 September 2025 06:42:35 +0000 (0:00:03.832) 0:00:17.406 ****** 2025-09-29 06:49:05.448857 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.448864 | orchestrator | 2025-09-29 06:49:05.448871 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-29 06:49:05.448878 | orchestrator | Monday 29 September 2025 06:42:39 +0000 (0:00:03.881) 0:00:21.287 ****** 2025-09-29 06:49:05.448885 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-29 06:49:05.448893 | orchestrator | changed: [localhost] => (item=member) 2025-09-29 06:49:05.448901 | orchestrator | changed: [localhost] => (item=creator) 2025-09-29 06:49:05.448908 | orchestrator | 2025-09-29 06:49:05.448915 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-29 06:49:05.448921 | orchestrator | Monday 29 September 2025 06:42:51 +0000 (0:00:11.885) 0:00:33.173 ****** 2025-09-29 06:49:05.449016 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.449025 | orchestrator | 2025-09-29 06:49:05.449032 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-29 06:49:05.449039 | orchestrator | Monday 29 September 2025 06:42:55 +0000 (0:00:04.597) 0:00:37.771 ****** 2025-09-29 06:49:05.449046 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.449053 | orchestrator | 2025-09-29 06:49:05.449060 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-09-29 06:49:05.449068 | orchestrator | Monday 29 September 2025 06:43:01 +0000 (0:00:05.252) 0:00:43.023 ****** 2025-09-29 06:49:05.449074 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.449081 | orchestrator | 2025-09-29 06:49:05.449088 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-29 06:49:05.449096 | orchestrator | Monday 29 September 2025 06:43:05 +0000 (0:00:04.697) 0:00:47.720 ****** 2025-09-29 06:49:05.449103 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.449110 | orchestrator | 2025-09-29 06:49:05.449117 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2025-09-29 06:49:05.449125 | orchestrator | Monday 29 September 2025 06:43:09 +0000 (0:00:03.753) 0:00:51.474 ****** 2025-09-29 06:49:05.449155 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.449165 | orchestrator | 2025-09-29 06:49:05.449174 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-29 06:49:05.449183 | orchestrator | Monday 29 September 2025 06:43:13 +0000 (0:00:03.745) 0:00:55.220 ****** 2025-09-29 06:49:05.449191 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.449198 | orchestrator | 2025-09-29 06:49:05.449205 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-29 06:49:05.449213 | orchestrator | Monday 29 September 2025 06:43:16 +0000 (0:00:03.575) 0:00:58.795 ****** 2025-09-29 06:49:05.449220 | orchestrator | changed: [localhost] 2025-09-29 06:49:05.449227 | orchestrator | 2025-09-29 06:49:05.449234 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-29 06:49:05.449242 | orchestrator | Monday 29 September 2025 06:43:32 +0000 (0:00:15.402) 0:01:14.198 ****** 2025-09-29 06:49:05.449250 | orchestrator | changed: [localhost] => (item=test) 2025-09-29 06:49:05.449258 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-29 06:49:05.449266 | orchestrator | 2025-09-29 06:49:05.449274 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-29 06:49:05.449283 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-29 06:49:05.449290 | orchestrator | 2025-09-29 06:49:05.449300 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-29 06:49:05.449308 | orchestrator | 2025-09-29 06:49:05.449315 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-29 06:49:05.449350 | 
orchestrator | changed: [localhost] => (item=test-3) 2025-09-29 06:49:05.449359 | orchestrator | 2025-09-29 06:49:05.449366 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-29 06:49:05.449378 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-29 06:49:05.449385 | orchestrator | 2025-09-29 06:49:05.449394 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-09-29 06:49:05.449402 | orchestrator | Monday 29 September 2025 06:47:45 +0000 (0:04:13.538) 0:05:27.736 ****** 2025-09-29 06:49:05.449411 | orchestrator | changed: [localhost] => (item=test) 2025-09-29 06:49:05.449421 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-29 06:49:05.449428 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-29 06:49:05.449436 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-29 06:49:05.449445 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-29 06:49:05.449455 | orchestrator | 2025-09-29 06:49:05.449463 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-09-29 06:49:05.449470 | orchestrator | Monday 29 September 2025 06:48:08 +0000 (0:00:22.863) 0:05:50.600 ****** 2025-09-29 06:49:05.449493 | orchestrator | changed: [localhost] => (item=test) 2025-09-29 06:49:05.449503 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-29 06:49:05.449510 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-29 06:49:05.449517 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-29 06:49:05.449524 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-29 06:49:05.449532 | orchestrator | 2025-09-29 06:49:05.449541 | orchestrator | TASK [Create test volume] ****************************************************** 2025-09-29 06:49:05.449549 | orchestrator | Monday 29 September 2025 06:48:40 +0000 (0:00:32.258) 0:06:22.858 ****** 2025-09-29 
changed: [localhost]

TASK [Attach test volume] ******************************************************
Monday 29 September 2025 06:48:47 +0000 (0:00:06.111) 0:06:28.970 ******
changed: [localhost]

TASK [Create floating ip address] **********************************************
Monday 29 September 2025 06:49:00 +0000 (0:00:13.400) 0:06:42.370 ******
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Monday 29 September 2025 06:49:05 +0000 (0:00:04.729) 0:06:47.100 ******
ok: [localhost] => {
    "msg": "192.168.112.154"
}

PLAY RECAP *********************************************************************
localhost : ok=20  changed=18  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Monday 29 September 2025 06:49:05 +0000 (0:00:00.030) 0:06:47.131 ******
===============================================================================
Create test instances ------------------------------------------------- 253.54s
Add tag to instances --------------------------------------------------- 32.26s
Add metadata to instances ---------------------------------------------- 22.86s
Create test network topology ------------------------------------------- 15.40s
Attach test volume ----------------------------------------------------- 13.40s
Add member roles to user test ------------------------------------------ 11.89s
Add manager role to user test-admin ------------------------------------- 6.28s
Create test volume ------------------------------------------------------ 6.11s
Create ssh security group ----------------------------------------------- 5.25s
Create floating ip address ---------------------------------------------- 4.73s
Add rule to ssh security group ------------------------------------------ 4.70s
Create test server group ------------------------------------------------ 4.60s
Create test-admin user -------------------------------------------------- 3.91s
Create test user -------------------------------------------------------- 3.88s
Create test project ----------------------------------------------------- 3.83s
Create icmp security group ---------------------------------------------- 3.75s
Add rule to icmp security group ----------------------------------------- 3.75s
Create test keypair ----------------------------------------------------- 3.58s
Create test domain ------------------------------------------------------ 3.30s
Print floating ip address ----------------------------------------------- 0.03s
2025-09-29 06:49:05.638123 | orchestrator | + server_list
2025-09-29 06:49:05.638223 | orchestrator | + openstack --os-cloud test server list
2025-09-29 06:49:09.379424 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+
| ID                                   | Name   | Status | Networks                                           | Image                    | Flavor   |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+
| 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 | test-4 | ACTIVE | auto_allocated_network=10.42.0.50, 192.168.112.112 | N/A (booted from volume) | SCS-1L-1 |
| b63aca68-3262-43be-aa6a-a9d0ecbda32e | test-3 | ACTIVE | auto_allocated_network=10.42.0.51, 192.168.112.108 | N/A (booted from volume) | SCS-1L-1 |
| 61f9dca3-f759-42be-9bf5-b83588e0542c | test-2 | ACTIVE | auto_allocated_network=10.42.0.23, 192.168.112.188 | N/A (booted from volume) | SCS-1L-1 |
| d3dba11d-3892-44e1-b860-f8269185e142 | test-1 | ACTIVE | auto_allocated_network=10.42.0.10, 192.168.112.122 | N/A (booted from volume) | SCS-1L-1 |
| 955c420a-2ed9-4199-812a-ecd4f86250b2 | test   | ACTIVE | auto_allocated_network=10.42.0.3, 192.168.112.154  | N/A (booted from volume) | SCS-1L-1 |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+
2025-09-29 06:49:09.527877 | orchestrator | + openstack --os-cloud test server show test
2025-09-29 06:49:12.567022 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-09-29T06:44:18.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.3, 192.168.112.154 |
| config_drive | |
| created | 2025-09-29T06:43:42Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 422af15d6a9064f4d4b454a1bffdd412b3b4cd0a7c3c8b3add6ed42f |
| host_status | None |
| id | 955c420a-2ed9-4199-812a-ecd4f86250b2 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 0c0f552b84954a0b9257d4956f8c90b5 |
| properties | hostname='test' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-09-29T06:47:50Z |
| user_id | 12a36ea0ee234ff19beb3838013b90d7 |
| volumes_attached | delete_on_termination='True', id='782817fa-806b-4d10-a4fa-ce9c49ead3d6' |
| | delete_on_termination='False', id='1495a64c-c4d2-4730-932b-687035b3cbe3' |
+-------------------------------------+--------------------------------------------------------------------------+
2025-09-29 06:49:12.728503 | orchestrator | + openstack --os-cloud test server show test-1
+-------------------------------------+--------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-09-29T06:45:06.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.10, 192.168.112.122 |
| config_drive | |
| created | 2025-09-29T06:44:31Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 65812d459b2ffd4e7244f76196b66bd9c824a4a73018f0960f0b1a7a |
| host_status | None |
| id | d3dba11d-3892-44e1-b860-f8269185e142 |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 0c0f552b84954a0b9257d4956f8c90b5 |
| properties | hostname='test-1' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-09-29T06:47:54Z |
| user_id | 12a36ea0ee234ff19beb3838013b90d7 |
| volumes_attached | delete_on_termination='True', id='9b21f9e3-f4a6-4a4f-902a-b71845484328' |
+-------------------------------------+--------------------------------------------------------------------------+
2025-09-29 06:49:15.879975 | orchestrator | + openstack --os-cloud test server show test-2
2025-09-29 06:49:19.203764 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-09-29T06:46:04.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.23, 192.168.112.188 |
| config_drive | |
| created | 2025-09-29T06:45:27Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 195bef9116d9e0bb2da9f73c2d2bc3f8b1f80c4c44a22f8a147e4ff8 |
| host_status | None |
| id | 61f9dca3-f759-42be-9bf5-b83588e0542c |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 0c0f552b84954a0b9257d4956f8c90b5 |
| properties | hostname='test-2' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-09-29T06:47:59Z |
| user_id | 12a36ea0ee234ff19beb3838013b90d7 |
| volumes_attached | delete_on_termination='True', id='a280e41b-ced2-4329-aef5-e0edc3c16e39' |
+-------------------------------------+--------------------------------------------------------------------------+
2025-09-29 06:49:19.463215 | orchestrator | + openstack --os-cloud test server show test-3
2025-09-29 06:49:22.507009 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-09-29T06:46:48.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.51, 192.168.112.108 |
| config_drive | |
| created | 2025-09-29T06:46:22Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 422af15d6a9064f4d4b454a1bffdd412b3b4cd0a7c3c8b3add6ed42f |
| host_status | None |
| id | b63aca68-3262-43be-aa6a-a9d0ecbda32e |
| image | N/A (booted from volume) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-3 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 0c0f552b84954a0b9257d4956f8c90b5 |
| properties | hostname='test-3' |
| security_groups | name='icmp' |
| | name='ssh' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-09-29T06:48:03Z |
| user_id | 12a36ea0ee234ff19beb3838013b90d7 |
| volumes_attached | delete_on_termination='True', id='c448074b-ad2d-460b-88b8-34958116aa18' |
+-------------------------------------+--------------------------------------------------------------------------+
2025-09-29 06:49:22.776853 | orchestrator | + openstack --os-cloud test server show test-4
2025-09-29 06:49:25.707284 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------+
| Field | Value |
+-------------------------------------+--------------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-09-29T06:47:33.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.50, 192.168.112.112 |
| config_drive | |
| created | 2025-09-29T06:47:07Z |
| description | None |
| flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 195bef9116d9e0bb2da9f73c2d2bc3f8b1f80c4c44a22f8a147e4ff8 |
| host_status | None |
| | id | 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 | 2025-09-29 06:49:25.707638 | orchestrator | | image | N/A (booted from volume) | 2025-09-29 06:49:25.707643 | orchestrator | | key_name | test | 2025-09-29 06:49:25.707653 | orchestrator | | locked | False | 2025-09-29 06:49:25.707659 | orchestrator | | locked_reason | None | 2025-09-29 06:49:25.707665 | orchestrator | | name | test-4 | 2025-09-29 06:49:25.707670 | orchestrator | | pinned_availability_zone | None | 2025-09-29 06:49:25.707676 | orchestrator | | progress | 0 | 2025-09-29 06:49:25.707683 | orchestrator | | project_id | 0c0f552b84954a0b9257d4956f8c90b5 | 2025-09-29 06:49:25.707689 | orchestrator | | properties | hostname='test-4' | 2025-09-29 06:49:25.707700 | orchestrator | | security_groups | name='icmp' | 2025-09-29 06:49:25.707706 | orchestrator | | | name='ssh' | 2025-09-29 06:49:25.707714 | orchestrator | | server_groups | None | 2025-09-29 06:49:25.707720 | orchestrator | | status | ACTIVE | 2025-09-29 06:49:25.707725 | orchestrator | | tags | test | 2025-09-29 06:49:25.707731 | orchestrator | | trusted_image_certificates | None | 2025-09-29 06:49:25.707736 | orchestrator | | updated | 2025-09-29T06:48:08Z | 2025-09-29 06:49:25.707742 | orchestrator | | user_id | 12a36ea0ee234ff19beb3838013b90d7 | 2025-09-29 06:49:25.707750 | orchestrator | | volumes_attached | delete_on_termination='True', id='a32b9ac1-99bb-4da7-b452-1b5aa93e58b5' | 2025-09-29 06:49:25.709570 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-29 06:49:25.876414 | orchestrator | + server_ping 2025-09-29 06:49:25.877879 | orchestrator | ++ openstack 
--os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-29 06:49:25.878003 | orchestrator | ++ tr -d '\r' 2025-09-29 06:49:28.376713 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:49:28.376851 | orchestrator | + ping -c3 192.168.112.122 2025-09-29 06:49:28.393546 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data. 2025-09-29 06:49:28.393632 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=9.08 ms 2025-09-29 06:49:29.388141 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.45 ms 2025-09-29 06:49:30.389166 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.34 ms 2025-09-29 06:49:30.389284 | orchestrator | 2025-09-29 06:49:30.389308 | orchestrator | --- 192.168.112.122 ping statistics --- 2025-09-29 06:49:30.389327 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:49:30.389342 | orchestrator | rtt min/avg/max/mdev = 1.337/4.287/9.076/3.416 ms 2025-09-29 06:49:30.389994 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:49:30.390095 | orchestrator | + ping -c3 192.168.112.154 2025-09-29 06:49:30.401173 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 
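The `server_ping` helper traced above is not printed in full anywhere in this log. A minimal reconstruction from the `set -x` output — a sketch, not the testbed's verbatim source; the function body is inferred from the traced `for`/`ping` lines — might look like:

```shell
# Sketch reconstructed from the set -x trace: ping every ACTIVE floating IP
# of the "test" cloud three times.
server_ping() {
    local address
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}

# The tr -d '\r' matters: CLI output piped through some terminals carries
# carriage returns, and "ping 192.168.112.122<CR>" would fail to resolve.
printf '192.168.112.122\r\n192.168.112.154\r\n' | tr -d '\r'
```

The loop intentionally iterates over word-split command substitution, which is safe here because IP addresses never contain whitespace.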
2025-09-29 06:49:30.401266 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=6.59 ms 2025-09-29 06:49:31.398786 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.07 ms 2025-09-29 06:49:32.400206 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=2.01 ms 2025-09-29 06:49:32.400334 | orchestrator | 2025-09-29 06:49:32.400359 | orchestrator | --- 192.168.112.154 ping statistics --- 2025-09-29 06:49:32.400378 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:49:32.400394 | orchestrator | rtt min/avg/max/mdev = 2.012/3.557/6.591/2.145 ms 2025-09-29 06:49:32.401404 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:49:32.401467 | orchestrator | + ping -c3 192.168.112.108 2025-09-29 06:49:32.414086 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2025-09-29 06:49:32.414170 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=8.80 ms 2025-09-29 06:49:33.409112 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.64 ms 2025-09-29 06:49:34.410379 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.24 ms 2025-09-29 06:49:34.410468 | orchestrator | 2025-09-29 06:49:34.410480 | orchestrator | --- 192.168.112.108 ping statistics --- 2025-09-29 06:49:34.410492 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-29 06:49:34.410501 | orchestrator | rtt min/avg/max/mdev = 2.242/4.560/8.799/3.001 ms 2025-09-29 06:49:34.410511 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:49:34.410521 | orchestrator | + ping -c3 192.168.112.188 2025-09-29 06:49:34.423525 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
2025-09-29 06:49:34.423617 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=8.45 ms 2025-09-29 06:49:35.418622 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.51 ms 2025-09-29 06:49:36.420887 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=2.16 ms 2025-09-29 06:49:36.421047 | orchestrator | 2025-09-29 06:49:36.421065 | orchestrator | --- 192.168.112.188 ping statistics --- 2025-09-29 06:49:36.421075 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:49:36.421083 | orchestrator | rtt min/avg/max/mdev = 2.161/4.373/8.446/2.883 ms 2025-09-29 06:49:36.421091 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:49:36.421099 | orchestrator | + ping -c3 192.168.112.112 2025-09-29 06:49:36.433213 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 2025-09-29 06:49:36.433317 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=7.46 ms 2025-09-29 06:49:37.430006 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.62 ms 2025-09-29 06:49:38.431110 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.77 ms 2025-09-29 06:49:38.431183 | orchestrator | 2025-09-29 06:49:38.431190 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-09-29 06:49:38.431196 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-29 06:49:38.431219 | orchestrator | rtt min/avg/max/mdev = 1.772/3.952/7.462/2.505 ms 2025-09-29 06:49:38.431667 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-29 06:49:38.431685 | orchestrator | + compute_list 2025-09-29 06:49:38.431690 | orchestrator | + osism manage compute list testbed-node-3 2025-09-29 06:49:41.592276 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-29 06:49:41.592384 | 
orchestrator | | ID | Name | Status |
2025-09-29 06:49:41.592399 | orchestrator | |--------------------------------------+--------+----------|
2025-09-29 06:49:41.592411 | orchestrator | | b63aca68-3262-43be-aa6a-a9d0ecbda32e | test-3 | ACTIVE |
2025-09-29 06:49:41.592422 | orchestrator | | 955c420a-2ed9-4199-812a-ecd4f86250b2 | test | ACTIVE |
2025-09-29 06:49:41.592453 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:49:41.774549 | orchestrator | + osism manage compute list testbed-node-4
2025-09-29 06:49:45.202095 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:49:45.202167 | orchestrator | | ID | Name | Status |
2025-09-29 06:49:45.202173 | orchestrator | |--------------------------------------+--------+----------|
2025-09-29 06:49:45.202178 | orchestrator | | d3dba11d-3892-44e1-b860-f8269185e142 | test-1 | ACTIVE |
2025-09-29 06:49:45.202182 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:49:45.394147 | orchestrator | + osism manage compute list testbed-node-5
2025-09-29 06:49:48.652727 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:49:48.652835 | orchestrator | | ID | Name | Status |
2025-09-29 06:49:48.652845 | orchestrator | |--------------------------------------+--------+----------|
2025-09-29 06:49:48.652852 | orchestrator | | 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 | test-4 | ACTIVE |
2025-09-29 06:49:48.652860 | orchestrator | | 61f9dca3-f759-42be-9bf5-b83588e0542c | test-2 | ACTIVE |
2025-09-29 06:49:48.652867 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:49:48.972894 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-09-29 06:49:52.390381 | orchestrator | 2025-09-29 06:49:52 | INFO  | Live migrating server d3dba11d-3892-44e1-b860-f8269185e142
2025-09-29 06:50:05.559753 | orchestrator | 
2025-09-29 06:50:05 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:08.033222 | orchestrator | 2025-09-29 06:50:08 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:10.431288 | orchestrator | 2025-09-29 06:50:10 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:12.739533 | orchestrator | 2025-09-29 06:50:12 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:15.134373 | orchestrator | 2025-09-29 06:50:15 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:17.456462 | orchestrator | 2025-09-29 06:50:17 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:20.038251 | orchestrator | 2025-09-29 06:50:20 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:22.336550 | orchestrator | 2025-09-29 06:50:22 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:24.653396 | orchestrator | 2025-09-29 06:50:24 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress 2025-09-29 06:50:27.037453 | orchestrator | 2025-09-29 06:50:27 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) completed with status ACTIVE 2025-09-29 06:50:27.232422 | orchestrator | + compute_list 2025-09-29 06:50:27.232547 | orchestrator | + osism manage compute list testbed-node-3 2025-09-29 06:50:30.275420 | orchestrator | +--------------------------------------+--------+----------+ 2025-09-29 06:50:30.275583 | orchestrator | | ID | Name | Status | 2025-09-29 06:50:30.275600 | orchestrator | |--------------------------------------+--------+----------| 2025-09-29 
06:50:30.275612 | orchestrator | | b63aca68-3262-43be-aa6a-a9d0ecbda32e | test-3 | ACTIVE |
2025-09-29 06:50:30.275623 | orchestrator | | d3dba11d-3892-44e1-b860-f8269185e142 | test-1 | ACTIVE |
2025-09-29 06:50:30.275634 | orchestrator | | 955c420a-2ed9-4199-812a-ecd4f86250b2 | test | ACTIVE |
2025-09-29 06:50:30.275645 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:50:30.594356 | orchestrator | + osism manage compute list testbed-node-4
2025-09-29 06:50:33.398342 | orchestrator | +------+--------+----------+
2025-09-29 06:50:33.398470 | orchestrator | | ID | Name | Status |
2025-09-29 06:50:33.398496 | orchestrator | |------+--------+----------|
2025-09-29 06:50:33.398516 | orchestrator | +------+--------+----------+
2025-09-29 06:50:33.725194 | orchestrator | + osism manage compute list testbed-node-5
2025-09-29 06:50:36.807991 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:50:36.808105 | orchestrator | | ID | Name | Status |
2025-09-29 06:50:36.808124 | orchestrator | |--------------------------------------+--------+----------|
2025-09-29 06:50:36.808141 | orchestrator | | 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 | test-4 | ACTIVE |
2025-09-29 06:50:36.808157 | orchestrator | | 61f9dca3-f759-42be-9bf5-b83588e0542c | test-2 | ACTIVE |
2025-09-29 06:50:36.808173 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:50:37.105096 | orchestrator | + server_ping
2025-09-29 06:50:37.106197 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-29 06:50:37.106471 | orchestrator | ++ tr -d '\r'
2025-09-29 06:50:39.939004 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:50:39.939121 | orchestrator | + ping -c3 192.168.112.122
2025-09-29 06:50:39.952826 | orchestrator | PING 
192.168.112.122 (192.168.112.122) 56(84) bytes of data. 2025-09-29 06:50:39.952949 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=9.54 ms 2025-09-29 06:50:40.948070 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.90 ms 2025-09-29 06:50:41.948863 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=2.01 ms 2025-09-29 06:50:41.949065 | orchestrator | 2025-09-29 06:50:41.949097 | orchestrator | --- 192.168.112.122 ping statistics --- 2025-09-29 06:50:41.949119 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:50:41.949163 | orchestrator | rtt min/avg/max/mdev = 2.008/4.815/9.536/3.357 ms 2025-09-29 06:50:41.949187 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:50:41.949207 | orchestrator | + ping -c3 192.168.112.154 2025-09-29 06:50:41.963353 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 
2025-09-29 06:50:41.963479 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=9.46 ms 2025-09-29 06:50:42.958407 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.82 ms 2025-09-29 06:50:43.960014 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.72 ms 2025-09-29 06:50:43.960119 | orchestrator | 2025-09-29 06:50:43.960135 | orchestrator | --- 192.168.112.154 ping statistics --- 2025-09-29 06:50:43.960149 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-29 06:50:43.960160 | orchestrator | rtt min/avg/max/mdev = 1.720/4.664/9.455/3.417 ms 2025-09-29 06:50:43.960199 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:50:43.960212 | orchestrator | + ping -c3 192.168.112.108 2025-09-29 06:50:43.974963 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2025-09-29 06:50:43.975047 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=10.5 ms 2025-09-29 06:50:44.968044 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.46 ms 2025-09-29 06:50:45.969300 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.91 ms 2025-09-29 06:50:45.969407 | orchestrator | 2025-09-29 06:50:45.969424 | orchestrator | --- 192.168.112.108 ping statistics --- 2025-09-29 06:50:45.969437 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:50:45.969475 | orchestrator | rtt min/avg/max/mdev = 1.914/4.963/10.516/3.932 ms 2025-09-29 06:50:45.969729 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:50:45.969762 | orchestrator | + ping -c3 192.168.112.188 2025-09-29 06:50:45.983245 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
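`osism manage compute migrate --yes --target <target> <source>` live-migrates every instance off the source hypervisor and polls each migration until it settles, as the INFO lines above show. A plain-OpenStack approximation of that migrate-and-poll loop — a sketch, not osism internals; the `$OSC` indirection and the 2-second poll interval are assumptions — could look like:

```shell
# Sketch of the migrate-and-poll pattern visible in the log above.
# $OSC exists so tests (or other clouds) can swap the client command.
: "${OSC:=openstack --os-cloud test}"

live_migrate() {
    local server="$1" target="$2"
    $OSC server migrate --live-migration --host "$target" "$server"
    # Nova reports task_state "migrating" while the live migration runs.
    while [ "$($OSC server show "$server" -f value -c OS-EXT-STS:task_state)" = "migrating" ]; do
        echo "Live migration of $server is still in progress"
        sleep 2
    done
    # Final status, e.g. ACTIVE on success.
    $OSC server show "$server" -f value -c status
}
```

Note that `--host` requires admin credentials, and osism additionally resolves instance names and iterates over all servers on the source host, which this sketch leaves out.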
2025-09-29 06:50:45.983349 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=8.84 ms 2025-09-29 06:50:46.979355 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.64 ms 2025-09-29 06:50:47.980802 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=2.09 ms 2025-09-29 06:50:47.980936 | orchestrator | 2025-09-29 06:50:47.980962 | orchestrator | --- 192.168.112.188 ping statistics --- 2025-09-29 06:50:47.980980 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:50:47.980998 | orchestrator | rtt min/avg/max/mdev = 2.093/4.522/8.838/3.059 ms 2025-09-29 06:50:47.981016 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:50:47.981033 | orchestrator | + ping -c3 192.168.112.112 2025-09-29 06:50:47.993955 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 2025-09-29 06:50:47.994135 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=7.99 ms 2025-09-29 06:50:48.988347 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.22 ms 2025-09-29 06:50:49.989038 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.71 ms 2025-09-29 06:50:49.989134 | orchestrator | 2025-09-29 06:50:49.989151 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-09-29 06:50:49.989165 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2025-09-29 06:50:49.989176 | orchestrator | rtt min/avg/max/mdev = 1.710/3.972/7.986/2.845 ms 2025-09-29 06:50:49.989200 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-09-29 06:50:52.898702 | orchestrator | 2025-09-29 06:50:52 | INFO  | Live migrating server 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 2025-09-29 06:51:05.354822 | orchestrator | 2025-09-29 06:51:05 | INFO  | Live migration of 
9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:07.746500 | orchestrator | 2025-09-29 06:51:07 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:10.110447 | orchestrator | 2025-09-29 06:51:10 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:12.491289 | orchestrator | 2025-09-29 06:51:12 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:14.826264 | orchestrator | 2025-09-29 06:51:14 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:17.165074 | orchestrator | 2025-09-29 06:51:17 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:19.744050 | orchestrator | 2025-09-29 06:51:19 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:22.113732 | orchestrator | 2025-09-29 06:51:22 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:24.422184 | orchestrator | 2025-09-29 06:51:24 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:51:26.731202 | orchestrator | 2025-09-29 06:51:26 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) completed with status ACTIVE 2025-09-29 06:51:26.731342 | orchestrator | 2025-09-29 06:51:26 | INFO  | Live migrating server 61f9dca3-f759-42be-9bf5-b83588e0542c 2025-09-29 06:51:38.988212 | orchestrator | 2025-09-29 06:51:38 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress 2025-09-29 06:51:41.373972 | orchestrator | 2025-09-29 06:51:41 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress 2025-09-29 06:51:43.764695 | orchestrator 
| 2025-09-29 06:51:43 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:51:46.057062 | orchestrator | 2025-09-29 06:51:46 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:51:48.323087 | orchestrator | 2025-09-29 06:51:48 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:51:50.646468 | orchestrator | 2025-09-29 06:51:50 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:51:52.989687 | orchestrator | 2025-09-29 06:51:52 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:51:55.325855 | orchestrator | 2025-09-29 06:51:55 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:51:57.643766 | orchestrator | 2025-09-29 06:51:57 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) completed with status ACTIVE
2025-09-29 06:51:57.833605 | orchestrator | + compute_list
2025-09-29 06:51:57.833689 | orchestrator | + osism manage compute list testbed-node-3
2025-09-29 06:52:00.872425 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:52:00.873047 | orchestrator | | ID | Name | Status |
2025-09-29 06:52:00.873084 | orchestrator | |--------------------------------------+--------+----------|
2025-09-29 06:52:00.873098 | orchestrator | | 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 | test-4 | ACTIVE |
2025-09-29 06:52:00.873109 | orchestrator | | b63aca68-3262-43be-aa6a-a9d0ecbda32e | test-3 | ACTIVE |
2025-09-29 06:52:00.873121 | orchestrator | | 61f9dca3-f759-42be-9bf5-b83588e0542c | test-2 | ACTIVE |
2025-09-29 06:52:00.873132 | orchestrator | | d3dba11d-3892-44e1-b860-f8269185e142 | test-1 | ACTIVE |
2025-09-29 06:52:00.873143 | orchestrator | | 955c420a-2ed9-4199-812a-ecd4f86250b2 | test | ACTIVE |
2025-09-29 06:52:00.873154 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:52:01.056501 | orchestrator | + osism manage compute list testbed-node-4
2025-09-29 06:52:03.549631 | orchestrator | +------+--------+----------+
2025-09-29 06:52:03.550563 | orchestrator | | ID | Name | Status |
2025-09-29 06:52:03.550604 | orchestrator | |------+--------+----------|
2025-09-29 06:52:03.550620 | orchestrator | +------+--------+----------+
2025-09-29 06:52:03.747143 | orchestrator | + osism manage compute list testbed-node-5
2025-09-29 06:52:06.430673 | orchestrator | +------+--------+----------+
2025-09-29 06:52:06.430812 | orchestrator | | ID | Name | Status |
2025-09-29 06:52:06.430837 | orchestrator | |------+--------+----------|
2025-09-29 06:52:06.430856 | orchestrator | +------+--------+----------+
2025-09-29 06:52:06.726329 | orchestrator | + server_ping
2025-09-29 06:52:06.727115 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-29 06:52:06.727198 | orchestrator | ++ tr -d '\r'
2025-09-29 06:52:09.535228 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:52:09.536300 | orchestrator | + ping -c3 192.168.112.122
2025-09-29 06:52:09.545783 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data. 
2025-09-29 06:52:09.545923 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=8.88 ms 2025-09-29 06:52:10.541087 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.53 ms 2025-09-29 06:52:11.542992 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=2.19 ms 2025-09-29 06:52:11.543086 | orchestrator | 2025-09-29 06:52:11.543101 | orchestrator | --- 192.168.112.122 ping statistics --- 2025-09-29 06:52:11.543113 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:52:11.543123 | orchestrator | rtt min/avg/max/mdev = 2.192/4.534/8.883/3.078 ms 2025-09-29 06:52:11.543134 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:52:11.543170 | orchestrator | + ping -c3 192.168.112.154 2025-09-29 06:52:11.555838 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data. 2025-09-29 06:52:11.555941 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=8.23 ms 2025-09-29 06:52:12.552233 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.61 ms 2025-09-29 06:52:13.553598 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=2.17 ms 2025-09-29 06:52:13.553682 | orchestrator | 2025-09-29 06:52:13.553694 | orchestrator | --- 192.168.112.154 ping statistics --- 2025-09-29 06:52:13.553704 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:52:13.553713 | orchestrator | rtt min/avg/max/mdev = 2.170/4.335/8.228/2.758 ms 2025-09-29 06:52:13.554515 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:52:13.554550 | orchestrator | + ping -c3 192.168.112.108 2025-09-29 06:52:13.566484 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 
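The `compute_list` helper traced above evidently runs `osism manage compute list` once per hypervisor. With plain OpenStack admin credentials, a roughly equivalent per-host view can be produced with `openstack server list --host`; this is a sketch, and the `admin` cloud name plus the function body are assumptions, not the testbed's actual script:

```shell
# Per-host instance listing, roughly what `osism manage compute list <host>`
# prints above; the "admin" clouds.yaml entry is an assumed credential set.
compute_list() {
    local host
    for host in "$@"; do
        echo "== $host =="
        openstack --os-cloud admin server list --all-projects \
            --host "$host" -c ID -c Name -c Status
    done
}

# Example: compute_list testbed-node-3 testbed-node-4 testbed-node-5
```

The `--host` filter is an admin-only server list option, which is why an admin credential set rather than the `test` project is assumed here.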
2025-09-29 06:52:13.566560 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=7.69 ms 2025-09-29 06:52:14.563397 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.56 ms 2025-09-29 06:52:15.564203 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.10 ms 2025-09-29 06:52:15.564288 | orchestrator | 2025-09-29 06:52:15.564303 | orchestrator | --- 192.168.112.108 ping statistics --- 2025-09-29 06:52:15.564316 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:52:15.564327 | orchestrator | rtt min/avg/max/mdev = 2.097/4.115/7.687/2.532 ms 2025-09-29 06:52:15.564670 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:52:15.564687 | orchestrator | + ping -c3 192.168.112.188 2025-09-29 06:52:15.576074 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 2025-09-29 06:52:15.576145 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.92 ms 2025-09-29 06:52:16.573769 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.42 ms 2025-09-29 06:52:17.574104 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.83 ms 2025-09-29 06:52:17.574168 | orchestrator | 2025-09-29 06:52:17.574176 | orchestrator | --- 192.168.112.188 ping statistics --- 2025-09-29 06:52:17.574181 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-29 06:52:17.574186 | orchestrator | rtt min/avg/max/mdev = 1.833/3.727/6.924/2.273 ms 2025-09-29 06:52:17.574191 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-29 06:52:17.574195 | orchestrator | + ping -c3 192.168.112.112 2025-09-29 06:52:17.581672 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 
2025-09-29 06:52:17.581745 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=4.65 ms 2025-09-29 06:52:18.581237 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.45 ms 2025-09-29 06:52:19.583096 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.78 ms 2025-09-29 06:52:19.583987 | orchestrator | 2025-09-29 06:52:19.584015 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-09-29 06:52:19.584025 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-29 06:52:19.584033 | orchestrator | rtt min/avg/max/mdev = 1.775/2.957/4.648/1.226 ms 2025-09-29 06:52:19.584053 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-09-29 06:52:22.851304 | orchestrator | 2025-09-29 06:52:22 | INFO  | Live migrating server 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 2025-09-29 06:52:34.297350 | orchestrator | 2025-09-29 06:52:34 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:52:36.742711 | orchestrator | 2025-09-29 06:52:36 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:52:39.149303 | orchestrator | 2025-09-29 06:52:39 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:52:41.534772 | orchestrator | 2025-09-29 06:52:41 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:52:43.822831 | orchestrator | 2025-09-29 06:52:43 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:52:46.209930 | orchestrator | 2025-09-29 06:52:46 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:52:48.591349 | orchestrator | 2025-09-29 06:52:48 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is 
still in progress 2025-09-29 06:52:50.997021 | orchestrator | 2025-09-29 06:52:50 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress 2025-09-29 06:52:53.286411 | orchestrator | 2025-09-29 06:52:53 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) completed with status ACTIVE 2025-09-29 06:52:53.286514 | orchestrator | 2025-09-29 06:52:53 | INFO  | Live migrating server b63aca68-3262-43be-aa6a-a9d0ecbda32e 2025-09-29 06:53:05.577859 | orchestrator | 2025-09-29 06:53:05 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:07.919987 | orchestrator | 2025-09-29 06:53:07 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:10.253847 | orchestrator | 2025-09-29 06:53:10 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:12.624618 | orchestrator | 2025-09-29 06:53:12 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:14.970596 | orchestrator | 2025-09-29 06:53:14 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:17.336086 | orchestrator | 2025-09-29 06:53:17 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:19.621708 | orchestrator | 2025-09-29 06:53:19 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:21.936632 | orchestrator | 2025-09-29 06:53:21 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress 2025-09-29 06:53:24.224350 | orchestrator | 2025-09-29 06:53:24 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) completed with status ACTIVE 2025-09-29 06:53:24.224456 | orchestrator | 2025-09-29 06:53:24 | INFO  | Live 
migrating server 61f9dca3-f759-42be-9bf5-b83588e0542c
2025-09-29 06:53:35.936298 | orchestrator | 2025-09-29 06:53:35 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:38.247815 | orchestrator | 2025-09-29 06:53:38 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:40.595448 | orchestrator | 2025-09-29 06:53:40 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:42.913293 | orchestrator | 2025-09-29 06:53:42 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:45.240838 | orchestrator | 2025-09-29 06:53:45 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:47.583296 | orchestrator | 2025-09-29 06:53:47 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:49.926944 | orchestrator | 2025-09-29 06:53:49 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:52.201851 | orchestrator | 2025-09-29 06:53:52 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:53:54.469647 | orchestrator | 2025-09-29 06:53:54 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) completed with status ACTIVE
2025-09-29 06:53:54.470604 | orchestrator | 2025-09-29 06:53:54 | INFO  | Live migrating server d3dba11d-3892-44e1-b860-f8269185e142
2025-09-29 06:54:04.855141 | orchestrator | 2025-09-29 06:54:04 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:07.328285 | orchestrator | 2025-09-29 06:54:07 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:09.632900 | orchestrator | 2025-09-29 06:54:09 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:12.130918 | orchestrator | 2025-09-29 06:54:12 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:14.390443 | orchestrator | 2025-09-29 06:54:14 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:16.793791 | orchestrator | 2025-09-29 06:54:16 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:19.132404 | orchestrator | 2025-09-29 06:54:19 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:21.463318 | orchestrator | 2025-09-29 06:54:21 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:54:23.714823 | orchestrator | 2025-09-29 06:54:23 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) completed with status ACTIVE
2025-09-29 06:54:23.714928 | orchestrator | 2025-09-29 06:54:23 | INFO  | Live migrating server 955c420a-2ed9-4199-812a-ecd4f86250b2
2025-09-29 06:54:33.157421 | orchestrator | 2025-09-29 06:54:33 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:35.616390 | orchestrator | 2025-09-29 06:54:35 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:37.945130 | orchestrator | 2025-09-29 06:54:37 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:40.292572 | orchestrator | 2025-09-29 06:54:40 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:42.636700 | orchestrator | 2025-09-29 06:54:42 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:44.982679 | orchestrator | 2025-09-29 06:54:44 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:47.301059 | orchestrator | 2025-09-29 06:54:47 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:49.626615 | orchestrator | 2025-09-29 06:54:49 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:51.899252 | orchestrator | 2025-09-29 06:54:51 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:54:54.203689 | orchestrator | 2025-09-29 06:54:54 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) completed with status ACTIVE
2025-09-29 06:54:54.506066 | orchestrator | + compute_list
2025-09-29 06:54:54.506222 | orchestrator | + osism manage compute list testbed-node-3
2025-09-29 06:54:57.231230 | orchestrator | +------+--------+----------+
2025-09-29 06:54:57.231333 | orchestrator | | ID | Name | Status |
2025-09-29 06:54:57.231373 | orchestrator | |------+--------+----------|
2025-09-29 06:54:57.231385 | orchestrator | +------+--------+----------+
2025-09-29 06:54:57.517263 | orchestrator | + osism manage compute list testbed-node-4
2025-09-29 06:55:00.666490 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:55:00.666627 | orchestrator | | ID | Name | Status |
2025-09-29 06:55:00.666645 | orchestrator | |--------------------------------------+--------+----------|
2025-09-29 06:55:00.666657 | orchestrator | | 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 | test-4 | ACTIVE |
2025-09-29 06:55:00.666668 | orchestrator | | b63aca68-3262-43be-aa6a-a9d0ecbda32e | test-3 | ACTIVE |
2025-09-29 06:55:00.666679 | orchestrator | | 61f9dca3-f759-42be-9bf5-b83588e0542c | test-2 | ACTIVE |
2025-09-29 06:55:00.666690 | orchestrator | | d3dba11d-3892-44e1-b860-f8269185e142 | test-1 | ACTIVE |
2025-09-29 06:55:00.666701 | orchestrator | | 955c420a-2ed9-4199-812a-ecd4f86250b2 | test | ACTIVE |
2025-09-29 06:55:00.666712 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:55:00.936437 | orchestrator | + osism manage compute list testbed-node-5
2025-09-29 06:55:03.427722 | orchestrator | +------+--------+----------+
2025-09-29 06:55:03.427832 | orchestrator | | ID | Name | Status |
2025-09-29 06:55:03.427849 | orchestrator | |------+--------+----------|
2025-09-29 06:55:03.427861 | orchestrator | +------+--------+----------+
2025-09-29 06:55:03.639849 | orchestrator | + server_ping
2025-09-29 06:55:03.640384 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-29 06:55:03.640964 | orchestrator | ++ tr -d '\r'
2025-09-29 06:55:06.268082 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:55:06.268185 | orchestrator | + ping -c3 192.168.112.122
2025-09-29 06:55:06.278198 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2025-09-29 06:55:06.278250 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=7.64 ms
2025-09-29 06:55:07.274759 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.08 ms
2025-09-29 06:55:08.276054 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.75 ms
2025-09-29 06:55:08.276175 | orchestrator |
2025-09-29 06:55:08.276206 | orchestrator | --- 192.168.112.122 ping statistics ---
2025-09-29 06:55:08.276228 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-29 06:55:08.276247 | orchestrator | rtt min/avg/max/mdev = 1.745/3.820/7.638/2.702 ms
2025-09-29 06:55:08.276807 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:55:08.276831 | orchestrator | + ping -c3 192.168.112.154
2025-09-29 06:55:08.285483 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data.
2025-09-29 06:55:08.285547 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=6.07 ms
2025-09-29 06:55:09.283253 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=1.97 ms
2025-09-29 06:55:10.283871 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.51 ms
2025-09-29 06:55:10.283951 | orchestrator |
2025-09-29 06:55:10.283963 | orchestrator | --- 192.168.112.154 ping statistics ---
2025-09-29 06:55:10.283972 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-29 06:55:10.283981 | orchestrator | rtt min/avg/max/mdev = 1.506/3.181/6.071/2.051 ms
2025-09-29 06:55:10.284240 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:55:10.284258 | orchestrator | + ping -c3 192.168.112.108
2025-09-29 06:55:10.292145 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-09-29 06:55:10.292172 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=4.83 ms
2025-09-29 06:55:11.292254 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.31 ms
2025-09-29 06:55:12.292918 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.46 ms
2025-09-29 06:55:12.293020 | orchestrator |
2025-09-29 06:55:12.293037 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-09-29 06:55:12.293050 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-29 06:55:12.293062 | orchestrator | rtt min/avg/max/mdev = 1.460/2.867/4.834/1.432 ms
2025-09-29 06:55:12.293321 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:55:12.293372 | orchestrator | + ping -c3 192.168.112.188
2025-09-29 06:55:12.305113 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2025-09-29 06:55:12.305258 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=7.50 ms
2025-09-29 06:55:13.301075 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.36 ms
2025-09-29 06:55:14.302405 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.95 ms
2025-09-29 06:55:14.302489 | orchestrator |
2025-09-29 06:55:14.302500 | orchestrator | --- 192.168.112.188 ping statistics ---
2025-09-29 06:55:14.302509 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-29 06:55:14.302517 | orchestrator | rtt min/avg/max/mdev = 1.948/3.936/7.502/2.526 ms
2025-09-29 06:55:14.302525 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:55:14.302533 | orchestrator | + ping -c3 192.168.112.112
2025-09-29 06:55:14.311626 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-09-29 06:55:14.311701 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.11 ms
2025-09-29 06:55:15.309779 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=1.95 ms
2025-09-29 06:55:16.311631 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.60 ms
2025-09-29 06:55:16.311730 | orchestrator |
2025-09-29 06:55:16.311748 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-09-29 06:55:16.311762 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-29 06:55:16.311774 | orchestrator | rtt min/avg/max/mdev = 1.597/2.886/5.107/1.577 ms
2025-09-29 06:55:16.311786 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-09-29 06:55:19.083303 | orchestrator | 2025-09-29 06:55:19 | INFO  | Live migrating server 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37
2025-09-29 06:55:28.468484 | orchestrator | 2025-09-29 06:55:28 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:30.919105 | orchestrator | 2025-09-29 06:55:30 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:33.400563 | orchestrator | 2025-09-29 06:55:33 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:35.690900 | orchestrator | 2025-09-29 06:55:35 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:37.994971 | orchestrator | 2025-09-29 06:55:37 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:40.278269 | orchestrator | 2025-09-29 06:55:40 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:42.570495 | orchestrator | 2025-09-29 06:55:42 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:44.979579 | orchestrator | 2025-09-29 06:55:44 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) is still in progress
2025-09-29 06:55:47.239491 | orchestrator | 2025-09-29 06:55:47 | INFO  | Live migration of 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 (test-4) completed with status ACTIVE
2025-09-29 06:55:47.239574 | orchestrator | 2025-09-29 06:55:47 | INFO  | Live migrating server b63aca68-3262-43be-aa6a-a9d0ecbda32e
2025-09-29 06:55:58.388401 | orchestrator | 2025-09-29 06:55:58 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:00.701496 | orchestrator | 2025-09-29 06:56:00 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:03.032518 | orchestrator | 2025-09-29 06:56:03 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:05.380763 | orchestrator | 2025-09-29 06:56:05 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:07.640469 | orchestrator | 2025-09-29 06:56:07 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:09.916636 | orchestrator | 2025-09-29 06:56:09 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:12.199180 | orchestrator | 2025-09-29 06:56:12 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:14.441327 | orchestrator | 2025-09-29 06:56:14 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) is still in progress
2025-09-29 06:56:16.728676 | orchestrator | 2025-09-29 06:56:16 | INFO  | Live migration of b63aca68-3262-43be-aa6a-a9d0ecbda32e (test-3) completed with status ACTIVE
2025-09-29 06:56:16.728811 | orchestrator | 2025-09-29 06:56:16 | INFO  | Live migrating server 61f9dca3-f759-42be-9bf5-b83588e0542c
2025-09-29 06:56:26.353233 | orchestrator | 2025-09-29 06:56:26 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:28.663849 | orchestrator | 2025-09-29 06:56:28 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:31.035557 | orchestrator | 2025-09-29 06:56:31 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:33.387565 | orchestrator | 2025-09-29 06:56:33 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:35.664565 | orchestrator | 2025-09-29 06:56:35 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:37.939784 | orchestrator | 2025-09-29 06:56:37 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:40.224959 | orchestrator | 2025-09-29 06:56:40 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:42.462101 | orchestrator | 2025-09-29 06:56:42 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) is still in progress
2025-09-29 06:56:44.783613 | orchestrator | 2025-09-29 06:56:44 | INFO  | Live migration of 61f9dca3-f759-42be-9bf5-b83588e0542c (test-2) completed with status ACTIVE
2025-09-29 06:56:44.783706 | orchestrator | 2025-09-29 06:56:44 | INFO  | Live migrating server d3dba11d-3892-44e1-b860-f8269185e142
2025-09-29 06:56:54.416164 | orchestrator | 2025-09-29 06:56:54 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:56:56.740729 | orchestrator | 2025-09-29 06:56:56 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:56:59.107702 | orchestrator | 2025-09-29 06:56:59 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:57:01.414182 | orchestrator | 2025-09-29 06:57:01 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:57:03.749250 | orchestrator | 2025-09-29 06:57:03 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:57:05.990360 | orchestrator | 2025-09-29 06:57:05 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:57:08.243643 | orchestrator | 2025-09-29 06:57:08 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:57:10.498815 | orchestrator | 2025-09-29 06:57:10 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:57:12.765010 | orchestrator | 2025-09-29 06:57:12 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) is still in progress
2025-09-29 06:57:15.030723 | orchestrator | 2025-09-29 06:57:15 | INFO  | Live migration of d3dba11d-3892-44e1-b860-f8269185e142 (test-1) completed with status ACTIVE
2025-09-29 06:57:15.030796 | orchestrator | 2025-09-29 06:57:15 | INFO  | Live migrating server 955c420a-2ed9-4199-812a-ecd4f86250b2
2025-09-29 06:57:26.733752 | orchestrator | 2025-09-29 06:57:26 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:29.107399 | orchestrator | 2025-09-29 06:57:29 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:31.456623 | orchestrator | 2025-09-29 06:57:31 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:33.783720 | orchestrator | 2025-09-29 06:57:33 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:36.094646 | orchestrator | 2025-09-29 06:57:36 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:38.411554 | orchestrator | 2025-09-29 06:57:38 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:40.660997 | orchestrator | 2025-09-29 06:57:40 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:43.035211 | orchestrator | 2025-09-29 06:57:43 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:45.367657 | orchestrator | 2025-09-29 06:57:45 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) is still in progress
2025-09-29 06:57:47.691242 | orchestrator | 2025-09-29 06:57:47 | INFO  | Live migration of 955c420a-2ed9-4199-812a-ecd4f86250b2 (test) completed with status ACTIVE
2025-09-29 06:57:47.881526 | orchestrator | + compute_list
2025-09-29 06:57:47.881621 | orchestrator | + osism manage compute list testbed-node-3
2025-09-29 06:57:50.338478 | orchestrator | +------+--------+----------+
2025-09-29 06:57:50.338585 | orchestrator | | ID | Name | Status |
2025-09-29 06:57:50.338601 | orchestrator | |------+--------+----------|
2025-09-29 06:57:50.338614 | orchestrator | +------+--------+----------+
2025-09-29 06:57:50.586741 | orchestrator | + osism manage compute list testbed-node-4
2025-09-29 06:57:53.277027 | orchestrator | +------+--------+----------+
2025-09-29 06:57:53.277162 | orchestrator | | ID | Name | Status |
2025-09-29 06:57:53.277179 | orchestrator | |------+--------+----------|
2025-09-29 06:57:53.277192 | orchestrator | +------+--------+----------+
2025-09-29 06:57:53.589464 | orchestrator | + osism manage compute list testbed-node-5
2025-09-29 06:57:56.611204 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:57:56.611311 | orchestrator | | ID | Name | Status |
2025-09-29 06:57:56.611327 | orchestrator | |--------------------------------------+--------+----------|
2025-09-29 06:57:56.611339 | orchestrator | | 9ecc9f0e-c01f-4952-9dc0-dea2526a2c37 | test-4 | ACTIVE |
2025-09-29 06:57:56.611350 | orchestrator | | b63aca68-3262-43be-aa6a-a9d0ecbda32e | test-3 | ACTIVE |
2025-09-29 06:57:56.611361 | orchestrator | | 61f9dca3-f759-42be-9bf5-b83588e0542c | test-2 | ACTIVE |
2025-09-29 06:57:56.611410 | orchestrator | | d3dba11d-3892-44e1-b860-f8269185e142 | test-1 | ACTIVE |
2025-09-29 06:57:56.611421 | orchestrator | | 955c420a-2ed9-4199-812a-ecd4f86250b2 | test | ACTIVE |
2025-09-29 06:57:56.611433 | orchestrator | +--------------------------------------+--------+----------+
2025-09-29 06:57:56.915363 | orchestrator | + server_ping
2025-09-29 06:57:56.916199 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-29 06:57:56.916533 | orchestrator | ++ tr -d '\r'
2025-09-29 06:57:59.784682 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:57:59.784789 | orchestrator | + ping -c3 192.168.112.122
2025-09-29 06:57:59.799541 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2025-09-29 06:57:59.799653 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=12.3 ms
2025-09-29 06:58:00.792450 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=3.19 ms
2025-09-29 06:58:01.792746 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.91 ms
2025-09-29 06:58:01.792825 | orchestrator |
2025-09-29 06:58:01.792836 | orchestrator | --- 192.168.112.122 ping statistics ---
2025-09-29 06:58:01.792845 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-29 06:58:01.792851 | orchestrator | rtt min/avg/max/mdev = 1.908/5.792/12.282/4.618 ms
2025-09-29 06:58:01.793123 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:58:01.793134 | orchestrator | + ping -c3 192.168.112.154
2025-09-29 06:58:01.802618 | orchestrator | PING 192.168.112.154 (192.168.112.154) 56(84) bytes of data.
2025-09-29 06:58:01.802658 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=1 ttl=63 time=5.76 ms
2025-09-29 06:58:02.801922 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=2 ttl=63 time=2.67 ms
2025-09-29 06:58:03.803659 | orchestrator | 64 bytes from 192.168.112.154: icmp_seq=3 ttl=63 time=1.99 ms
2025-09-29 06:58:03.803761 | orchestrator |
2025-09-29 06:58:03.803778 | orchestrator | --- 192.168.112.154 ping statistics ---
2025-09-29 06:58:03.803792 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-29 06:58:03.803803 | orchestrator | rtt min/avg/max/mdev = 1.987/3.473/5.760/1.640 ms
2025-09-29 06:58:03.803920 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:58:03.803947 | orchestrator | + ping -c3 192.168.112.108
2025-09-29 06:58:03.816687 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-09-29 06:58:03.816768 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=8.68 ms
2025-09-29 06:58:04.811829 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.50 ms
2025-09-29 06:58:05.812838 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.97 ms
2025-09-29 06:58:05.812940 | orchestrator |
2025-09-29 06:58:05.812954 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-09-29 06:58:05.812966 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-29 06:58:05.812976 | orchestrator | rtt min/avg/max/mdev = 1.972/4.384/8.678/3.043 ms
2025-09-29 06:58:05.813472 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:58:05.813551 | orchestrator | + ping -c3 192.168.112.188
2025-09-29 06:58:05.827257 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2025-09-29 06:58:05.827295 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=9.37 ms
2025-09-29 06:58:06.822101 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.22 ms
2025-09-29 06:58:07.823570 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.82 ms
2025-09-29 06:58:07.823673 | orchestrator |
2025-09-29 06:58:07.823691 | orchestrator | --- 192.168.112.188 ping statistics ---
2025-09-29 06:58:07.823704 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-29 06:58:07.823716 | orchestrator | rtt min/avg/max/mdev = 1.816/4.470/9.370/3.468 ms
2025-09-29 06:58:07.824067 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-29 06:58:07.824093 | orchestrator | + ping -c3 192.168.112.112
2025-09-29 06:58:07.834152 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-09-29 06:58:07.834221 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.85 ms
2025-09-29 06:58:08.832501 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.81 ms
2025-09-29 06:58:09.832755 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.92 ms
2025-09-29 06:58:09.832859 | orchestrator |
2025-09-29 06:58:09.832876 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-09-29 06:58:09.832889 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-29 06:58:09.832931 | orchestrator | rtt min/avg/max/mdev = 1.917/3.526/5.854/1.685 ms
2025-09-29 06:58:10.104891 | orchestrator | ok: Runtime: 0:19:39.976401
2025-09-29 06:58:10.158094 |
2025-09-29 06:58:10.158215 | TASK [Run tempest]
2025-09-29 06:58:10.692758 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:10.710817 |
2025-09-29 06:58:10.711016 | TASK [Check prometheus alert status]
2025-09-29 06:58:11.247973 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:11.251153 |
2025-09-29 06:58:11.251391 | PLAY RECAP
2025-09-29 06:58:11.251618 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-09-29 06:58:11.251711 |
2025-09-29 06:58:11.464460 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-29 06:58:11.466613 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-29 06:58:12.204226 |
2025-09-29 06:58:12.204380 | PLAY [Post output play]
2025-09-29 06:58:12.220241 |
2025-09-29 06:58:12.220374 | LOOP [stage-output : Register sources]
2025-09-29 06:58:12.284755 |
2025-09-29 06:58:12.285008 | TASK [stage-output : Check sudo]
2025-09-29 06:58:13.091398 | orchestrator | sudo: a password is required
2025-09-29 06:58:13.323080 | orchestrator | ok: Runtime: 0:00:00.022459
2025-09-29 06:58:13.337439 |
2025-09-29 06:58:13.337653 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-29 06:58:13.374587 |
2025-09-29 06:58:13.374883 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-29 06:58:13.445036 | orchestrator | ok
2025-09-29 06:58:13.454325 |
2025-09-29 06:58:13.454461 | LOOP [stage-output : Ensure target folders exist]
2025-09-29 06:58:13.895277 | orchestrator | ok: "docs"
2025-09-29 06:58:13.895627 |
2025-09-29 06:58:14.132904 | orchestrator | ok: "artifacts"
2025-09-29 06:58:14.379351 | orchestrator | ok: "logs"
2025-09-29 06:58:14.402223 |
2025-09-29 06:58:14.402395 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-29 06:58:14.441981 |
2025-09-29 06:58:14.442268 | TASK [stage-output : Make all log files readable]
2025-09-29 06:58:14.734301 | orchestrator | ok
2025-09-29 06:58:14.742129 |
2025-09-29 06:58:14.742246 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-29 06:58:14.776681 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:14.790602 |
2025-09-29 06:58:14.790747 | TASK [stage-output : Discover log files for compression]
2025-09-29 06:58:14.815082 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:14.824802 |
2025-09-29 06:58:14.824918 | LOOP [stage-output : Archive everything from logs]
2025-09-29 06:58:14.861615 |
2025-09-29 06:58:14.861757 | PLAY [Post cleanup play]
2025-09-29 06:58:14.870087 |
2025-09-29 06:58:14.870192 | TASK [Set cloud fact (Zuul deployment)]
2025-09-29 06:58:14.929938 | orchestrator | ok
2025-09-29 06:58:14.941496 |
2025-09-29 06:58:14.941610 | TASK [Set cloud fact (local deployment)]
2025-09-29 06:58:14.967672 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:14.976152 |
2025-09-29 06:58:14.976254 | TASK [Clean the cloud environment]
2025-09-29 06:58:15.580075 | orchestrator | 2025-09-29 06:58:15 - clean up servers
2025-09-29 06:58:16.310713 | orchestrator | 2025-09-29 06:58:16 - testbed-manager
2025-09-29 06:58:16.416576 | orchestrator | 2025-09-29 06:58:16 - testbed-node-3
2025-09-29 06:58:16.503413 | orchestrator | 2025-09-29 06:58:16 - testbed-node-0
2025-09-29 06:58:16.598514 | orchestrator | 2025-09-29 06:58:16 - testbed-node-5
2025-09-29 06:58:16.703854 | orchestrator | 2025-09-29 06:58:16 - testbed-node-2
2025-09-29 06:58:16.801837 | orchestrator | 2025-09-29 06:58:16 - testbed-node-4
2025-09-29 06:58:16.898912 | orchestrator | 2025-09-29 06:58:16 - testbed-node-1
2025-09-29 06:58:17.007083 | orchestrator | 2025-09-29 06:58:17 - clean up keypairs
2025-09-29 06:58:17.026645 | orchestrator | 2025-09-29 06:58:17 - testbed
2025-09-29 06:58:17.054862 | orchestrator | 2025-09-29 06:58:17 - wait for servers to be gone
2025-09-29 06:58:25.792956 | orchestrator | 2025-09-29 06:58:25 - clean up ports
2025-09-29 06:58:25.957970 | orchestrator | 2025-09-29 06:58:25 - 6dd62bde-481a-48f2-81b4-eb91bee3b1ae
2025-09-29 06:58:26.192332 | orchestrator | 2025-09-29 06:58:26 - 6f39fb26-5ac0-4b29-99cb-cd6dfecb42a8
2025-09-29 06:58:26.498927 | orchestrator | 2025-09-29 06:58:26 - 85ad8370-4f8d-4313-84f4-9ebafb85ac00
2025-09-29 06:58:26.986599 | orchestrator | 2025-09-29 06:58:26 - 8b035d24-02a6-45f2-8d2a-ca132bc1a781
2025-09-29 06:58:27.200342 | orchestrator | 2025-09-29 06:58:27 - d3dab759-1284-4317-9f0d-c714c6b6d53e
2025-09-29 06:58:27.418831 | orchestrator | 2025-09-29 06:58:27 - edc3ad44-6dbe-42de-9bc0-6229599565bc
2025-09-29 06:58:27.622256 | orchestrator | 2025-09-29 06:58:27 - ef7d5621-3c2a-4d6d-9048-986fa7905f26
2025-09-29 06:58:27.851568 | orchestrator | 2025-09-29 06:58:27 - clean up volumes
2025-09-29 06:58:27.974206 | orchestrator | 2025-09-29 06:58:27 - testbed-volume-2-node-base
2025-09-29 06:58:28.011453 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-0-node-base
2025-09-29 06:58:28.055199 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-5-node-base
2025-09-29 06:58:28.099202 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-3-node-base
2025-09-29 06:58:28.145738 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-1-node-base
2025-09-29 06:58:28.185555 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-4-node-base
2025-09-29 06:58:28.233631 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-manager-base
2025-09-29 06:58:28.273327 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-8-node-5
2025-09-29 06:58:28.313991 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-4-node-4
2025-09-29 06:58:28.357140 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-5-node-5
2025-09-29 06:58:28.398849 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-1-node-4
2025-09-29 06:58:28.445055 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-7-node-4
2025-09-29 06:58:28.489246 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-6-node-3
2025-09-29 06:58:28.529119 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-0-node-3
2025-09-29 06:58:28.568515 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-3-node-3
2025-09-29 06:58:28.617091 | orchestrator | 2025-09-29 06:58:28 - testbed-volume-2-node-5
2025-09-29 06:58:28.657369 | orchestrator | 2025-09-29 06:58:28 - disconnect routers
2025-09-29 06:58:28.770886 | orchestrator | 2025-09-29 06:58:28 - testbed
2025-09-29 06:58:29.775471 | orchestrator | 2025-09-29 06:58:29 - clean up subnets
2025-09-29 06:58:29.822262 | orchestrator | 2025-09-29 06:58:29 - subnet-testbed-management
2025-09-29 06:58:29.999228 | orchestrator | 2025-09-29 06:58:29 - clean up networks
2025-09-29 06:58:30.167623 | orchestrator | 2025-09-29 06:58:30 - net-testbed-management
2025-09-29 06:58:30.440863 | orchestrator | 2025-09-29 06:58:30 - clean up security groups
2025-09-29 06:58:30.483899 | orchestrator | 2025-09-29 06:58:30 - testbed-node
2025-09-29 06:58:30.599373 | orchestrator | 2025-09-29 06:58:30 - testbed-management
2025-09-29 06:58:30.732537 | orchestrator | 2025-09-29 06:58:30 - clean up floating ips
2025-09-29 06:58:30.766091 | orchestrator | 2025-09-29 06:58:30 - 81.163.193.20
2025-09-29 06:58:31.680592 | orchestrator | 2025-09-29 06:58:31 - clean up routers
2025-09-29 06:58:32.299857 | orchestrator | 2025-09-29 06:58:32 - testbed
2025-09-29 06:58:34.545462 | orchestrator | ok: Runtime: 0:00:18.933130
2025-09-29 06:58:34.549685 |
2025-09-29 06:58:34.549856 | PLAY RECAP
2025-09-29 06:58:34.550009 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-29 06:58:34.550077 |
2025-09-29 06:58:34.674655 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-29 06:58:34.677118 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-29 06:58:35.381607 |
2025-09-29 06:58:35.381752 | PLAY [Cleanup play]
2025-09-29 06:58:35.397745 |
2025-09-29 06:58:35.397864 | TASK [Set cloud fact (Zuul deployment)]
2025-09-29 06:58:35.451619 | orchestrator | ok
2025-09-29 06:58:35.463566 |
2025-09-29 06:58:35.463703 | TASK [Set cloud fact (local deployment)]
2025-09-29 06:58:35.488610 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:35.500565 |
2025-09-29 06:58:35.500678 | TASK [Clean the cloud environment]
2025-09-29 06:58:36.599283 | orchestrator | 2025-09-29 06:58:36 - clean up servers
2025-09-29 06:58:37.060568 | orchestrator | 2025-09-29 06:58:37 - clean up keypairs
2025-09-29 06:58:37.073391 | orchestrator | 2025-09-29 06:58:37 - wait for servers to be gone
2025-09-29 06:58:37.113127 | orchestrator | 2025-09-29 06:58:37 - clean up ports
2025-09-29 06:58:37.185133 | orchestrator | 2025-09-29 06:58:37 - clean up volumes
2025-09-29 06:58:37.243213 | orchestrator | 2025-09-29 06:58:37 - disconnect routers
2025-09-29 06:58:37.272400 | orchestrator | 2025-09-29 06:58:37 - clean up subnets
2025-09-29 06:58:37.293750 | orchestrator | 2025-09-29 06:58:37 - clean up networks
2025-09-29 06:58:37.414355 | orchestrator | 2025-09-29 06:58:37 - clean up security groups
2025-09-29 06:58:37.453898 | orchestrator | 2025-09-29 06:58:37 - clean up floating ips
2025-09-29 06:58:37.477916 | orchestrator | 2025-09-29 06:58:37 - clean up routers
2025-09-29 06:58:38.037702 | orchestrator | ok: Runtime: 0:00:01.228738
2025-09-29 06:58:38.042337 |
2025-09-29 06:58:38.042508 | PLAY RECAP
2025-09-29 06:58:38.042630 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-29 06:58:38.042694 |
2025-09-29 06:58:38.171421 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-29 06:58:38.172430 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-29 06:58:38.887580 |
2025-09-29 06:58:38.887725 | PLAY [Base post-fetch]
2025-09-29 06:58:38.902867 |
2025-09-29 06:58:38.903029 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-29 06:58:38.957674 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:38.964152 |
2025-09-29 06:58:38.964273 | TASK [fetch-output : Set log path for single node]
2025-09-29 06:58:39.006827 | orchestrator | ok
2025-09-29 06:58:39.014040 |
2025-09-29 06:58:39.014156 | LOOP [fetch-output : Ensure local output dirs]
2025-09-29 06:58:39.463239 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/work/logs"
2025-09-29 06:58:39.742568 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/work/artifacts"
2025-09-29 06:58:40.009973 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/6d60e6d45652465bae1d8101981b86c3/work/docs"
2025-09-29 06:58:40.030380 |
2025-09-29 06:58:40.030501 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-29 06:58:40.943986 | orchestrator | changed: .d..t...... ./
2025-09-29 06:58:40.944233 | orchestrator | changed: All items complete
2025-09-29 06:58:40.944268 |
2025-09-29 06:58:41.674593 | orchestrator | changed: .d..t...... ./
2025-09-29 06:58:42.402258 | orchestrator | changed: .d..t...... ./
2025-09-29 06:58:42.439238 |
2025-09-29 06:58:42.439419 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-29 06:58:42.475285 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:42.478541 | orchestrator | skipping: Conditional result was False
2025-09-29 06:58:42.500540 |
2025-09-29 06:58:42.500651 | PLAY RECAP
2025-09-29 06:58:42.500734 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-29 06:58:42.500777 |
2025-09-29 06:58:42.618398 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-29 06:58:42.620026 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-29 06:58:43.331567 |
2025-09-29 06:58:43.331720 | PLAY [Base post]
2025-09-29 06:58:43.345745 |
2025-09-29 06:58:43.345869 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-29 06:58:44.240129 | orchestrator | changed
2025-09-29 06:58:44.250041 |
2025-09-29 06:58:44.250172 | PLAY RECAP
2025-09-29 06:58:44.250259 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-29 06:58:44.250341 |
2025-09-29 06:58:44.367135 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-29 06:58:44.369427 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-29 06:58:45.135747 |
2025-09-29 06:58:45.135912 | PLAY [Base post-logs]
2025-09-29 06:58:45.146311 |
2025-09-29 06:58:45.146438 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-29 06:58:45.591806 | localhost | changed
2025-09-29 06:58:45.607096 |
2025-09-29 06:58:45.607264 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-29 06:58:45.645650 | localhost | ok
2025-09-29 06:58:45.652585 |
2025-09-29
06:58:45.652754 | TASK [Set zuul-log-path fact] 2025-09-29 06:58:45.671794 | localhost | ok 2025-09-29 06:58:45.686446 | 2025-09-29 06:58:45.686586 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-09-29 06:58:45.724520 | localhost | ok 2025-09-29 06:58:45.731006 | 2025-09-29 06:58:45.731189 | TASK [upload-logs : Create log directories] 2025-09-29 06:58:46.246193 | localhost | changed 2025-09-29 06:58:46.249048 | 2025-09-29 06:58:46.249156 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-09-29 06:58:46.759233 | localhost -> localhost | ok: Runtime: 0:00:00.006722 2025-09-29 06:58:46.763285 | 2025-09-29 06:58:46.763407 | TASK [upload-logs : Upload logs to log server] 2025-09-29 06:58:47.296126 | localhost | Output suppressed because no_log was given 2025-09-29 06:58:47.297969 | 2025-09-29 06:58:47.298070 | LOOP [upload-logs : Compress console log and json output] 2025-09-29 06:58:47.349481 | localhost | skipping: Conditional result was False 2025-09-29 06:58:47.354391 | localhost | skipping: Conditional result was False 2025-09-29 06:58:47.367830 | 2025-09-29 06:58:47.368065 | LOOP [upload-logs : Upload compressed console log and json output] 2025-09-29 06:58:47.414321 | localhost | skipping: Conditional result was False 2025-09-29 06:58:47.414916 | 2025-09-29 06:58:47.418558 | localhost | skipping: Conditional result was False 2025-09-29 06:58:47.431071 | 2025-09-29 06:58:47.431292 | LOOP [upload-logs : Upload console log and json output]